
The utility of low-dose pre-operative CT of ovarian tumor with artificial intelligence iterative reconstruction for diagnosing peritoneal invasion, lymph node and hepatic metastasis.

Cai X, Han J, Zhou W, Yang F, Liu J, Wang Q, Li R

May 13 2025
Diagnosis of peritoneal invasion, lymph node metastasis, and hepatic metastasis is crucial in the decision-making process of ovarian tumor treatment. This study aimed to test the feasibility of low-dose abdominopelvic CT with artificial intelligence iterative reconstruction (AIIR) for diagnosing peritoneal invasion, lymph node metastasis, and hepatic metastasis in pre-operative imaging of ovarian tumors. This study prospectively enrolled 88 patients with pathology-confirmed ovarian tumors, in whom routine-dose CT at the portal venous phase (120 kVp/ref. 200 mAs) with hybrid iterative reconstruction (HIR) was followed by a low-dose scan (120 kVp/ref. 40 mAs) with AIIR. The performance in diagnosing peritoneal invasion and lymph node metastasis was assessed using receiver operating characteristic (ROC) analysis, with pathological results serving as the reference. Hepatic parenchymal metastases were diagnosed, and the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were measured. The perihepatic structures were also scored on the clarity of the porta hepatis, gallbladder fossa, and intersegmental fissure. The effective dose of low-dose CT was 79.8% lower than that of the routine-dose scan (2.64 ± 0.46 vs. 13.04 ± 2.25 mSv, p < 0.001). Low-dose AIIR showed an area under the ROC curve (AUC) similar to that of routine-dose HIR for diagnosing both peritoneal invasion (0.961 vs. 0.960, p = 0.734) and lymph node metastasis (0.711 vs. 0.715, p = 0.355). All 10 hepatic parenchymal metastases were accurately diagnosed on both image sets. Low-dose AIIR exhibited higher SNR and CNR for hepatic parenchymal metastases and superior clarity of perihepatic structures. In low-dose pre-operative CT of ovarian tumors, AIIR delivers diagnostic accuracy for peritoneal invasion, lymph node metastasis, and hepatic metastasis comparable to that of routine-dose abdominopelvic CT. It is therefore feasible and diagnostically safe to apply up to 80% dose reduction in CT imaging of ovarian tumors by using AIIR.
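As a back-of-the-envelope check on the figures above, the snippet below recomputes the relative dose reduction from the two reported mean effective doses and illustrates how SNR and CNR are typically derived from region-of-interest statistics; the ROI numbers are hypothetical placeholders, not values from the study.

```python
# Minimal sketch: dose reduction and ROI-based SNR/CNR (illustrative values).
routine_dose_msv = 13.04   # mean effective dose of the routine-dose HIR scan (mSv)
low_dose_msv = 2.64        # mean effective dose of the low-dose AIIR scan (mSv)
reduction = 1 - low_dose_msv / routine_dose_msv
print(f"Relative dose reduction: {reduction:.1%}")   # ~79.8%

# SNR/CNR from ROI statistics (hypothetical HU measurements, not study data).
lesion_mean, noise_sd = 95.0, 12.0   # mean attenuation and noise (SD) in the lesion ROI
liver_mean = 120.0                    # mean attenuation in adjacent liver parenchyma
snr = lesion_mean / noise_sd
cnr = abs(liver_mean - lesion_mean) / noise_sd
print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")
```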

Segmentation of renal vessels on non-enhanced CT images using deep learning models.

Zhong H, Zhao Y, Zhang Y

May 13 2025
To evaluate the feasibility of performing renal vessel reconstruction on non-enhanced CT images using deep learning models. CT scans from 177 patients, covering the non-enhanced, arterial, and venous phases, were selected. These data were randomly divided into a training set (n = 120), validation set (n = 20), and test set (n = 37). In the training and validation sets, a radiologist marked out the right renal arteries and veins on the non-enhanced CT images, using the contrast phases as references. The trained deep learning models were then evaluated on the test set. A radiologist also performed renal vessel reconstruction on the test set without the contrast-phase reference, and these results were used for comparison. Reconstruction based on the arterial and venous phases served as the gold standard. Without the contrast-phase reference, both the radiologist and the model could accurately identify the main trunks of the artery and vein. The accuracy was 91.9% vs. 97.3% (model vs. radiologist) for the artery and 91.9% vs. 100% for the vein; these differences were not statistically significant. The model had difficulty identifying accessory arteries, with accuracy significantly lower than that of the radiologist (44.4% vs. 77.8%, p = 0.044). The model also had lower accuracy for accessory veins, but the difference was not significant (64.3% vs. 85.7%, p = 0.094). Deep learning models could accurately recognize the main trunks of the right renal artery and vein, with accuracy comparable to that of radiologists. Although the current model still had difficulty recognizing small accessory vessels, further training and model optimization may address these limitations.
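The abstract does not state which statistical test produced the p-values; the sketch below shows one common way to compare paired per-case identification results between a model and a reader, using McNemar's exact test from statsmodels on hypothetical outcomes.

```python
# Minimal sketch: paired accuracy comparison (model vs. radiologist), hypothetical data.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

model_correct = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1])    # 1 = vessel identified correctly
reader_correct = np.array([1, 1, 1, 1, 1, 0, 1, 1, 1])

print("model accuracy:", model_correct.mean())
print("reader accuracy:", reader_correct.mean())

# 2x2 agreement table for the paired outcomes.
table = np.array([
    [np.sum((model_correct == 1) & (reader_correct == 1)),
     np.sum((model_correct == 1) & (reader_correct == 0))],
    [np.sum((model_correct == 0) & (reader_correct == 1)),
     np.sum((model_correct == 0) & (reader_correct == 0))],
])
print("McNemar exact p-value:", mcnemar(table, exact=True).pvalue)
```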

Deep Learning-accelerated MRI in Body and Chest.

Rajamohan N, Bagga B, Bansal B, Ginocchio L, Gupta A, Chandarana H

May 13 2025
Deep learning reconstruction (DLR) provides an elegant solution for MR acceleration while preserving image quality. This advancement is crucial for body imaging, which is frequently marred by the increased likelihood of motion-related artifacts. Multiple vendor-specific models focusing on T2, T1, and diffusion-weighted imaging have been developed for the abdomen, pelvis, and chest, with the liver and prostate being the most well-studied organ systems. Variational networks with supervised DL models, including data consistency layers and regularizers, are the most common DLR methods. The common theme for all single-center studies on this subject has been noninferior or superior image quality metrics and lesion conspicuity compared with conventional sequences despite significant acquisition time reduction. DLR also offers the potential for denoising, artifact reduction, increased resolution, and increased signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), which can be balanced against acceleration benefits depending on the imaged organ system. Some specific challenges faced by DLR include slightly reduced lesion detection, cardiac motion-related signal loss, regional SNR variations, and variability in ADC measurements, as reported in different organ systems. Continued investigations with large-scale multicenter prospective clinical validation of DLR to document generalizability and demonstrate noninferior diagnostic accuracy with histopathologic correlation are the need of the hour. The creation of vendor-neutral solutions, open data sharing, and diversification of training data sets are also critical to strengthening model robustness.
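To make the variational-network idea concrete, here is a schematic single unrolled iteration for undersampled Cartesian MRI: a data-consistency gradient step toward the acquired k-space samples followed by a learned regularizer, represented by a placeholder denoiser. This is an illustrative sketch, not any vendor's implementation.

```python
# Schematic unrolled variational reconstruction step for Cartesian MRI (illustrative).
import numpy as np

def data_consistency_step(x, y, mask, lam=1.0):
    """Gradient step on ||M F x - y||^2: pull the image estimate toward acquired k-space."""
    k = np.fft.fft2(x)
    residual = mask * (k - y)          # error only where k-space was actually sampled
    return x - lam * np.fft.ifft2(residual)

def learned_regularizer(x):
    """Placeholder for a trained CNN regularizer/denoiser."""
    return x  # identity stand-in

def unrolled_iteration(x, y, mask):
    return learned_regularizer(data_consistency_step(x, y, mask))

# Toy usage: start from the zero-filled reconstruction of undersampled k-space.
mask = (np.random.rand(128, 128) < 0.3).astype(float)    # ~30% sampling pattern
y = mask * np.fft.fft2(np.random.rand(128, 128))          # acquired (masked) k-space
x = np.fft.ifft2(y)
x = unrolled_iteration(x, y, mask)
```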

Automatic CTA analysis for blood vessels and aneurysm features extraction in EVAR planning.

Robbi E, Ravanelli D, Allievi S, Raunig I, Bonvini S, Passerini A, Trianni A

May 12 2025
Endovascular Aneurysm Repair (EVAR) is a minimally invasive procedure crucial for treating abdominal aortic aneurysms (AAA), where precise pre-operative planning is essential. Current clinical methods rely on manual measurements, which are time-consuming and prone to errors. Although AI solutions are increasingly being developed to automate aspects of these processes, most existing approaches primarily focus on computing volumes and diameters, falling short of delivering a fully automated pre-operative analysis. This work presents BRAVE (Blood Vessels Recognition and Aneurysms Visualization Enhancement), the first comprehensive AI-driven solution for vascular segmentation and AAA analysis using pre-operative CTA scans. BRAVE offers exhaustive segmentation, identifying both the primary abdominal aorta and secondary vessels, often overlooked by existing methods, providing a complete view of the vascular structure. The pipeline performs advanced volumetric analysis of the aneurysm sac, quantifying thrombotic tissue and calcifications, and automatically identifies the proximal and distal sealing zones, critical for successful EVAR procedures. BRAVE enables fully automated processing, reducing manual intervention and improving clinical workflow efficiency. Trained on a multi-center open-access dataset, it demonstrates generalizability across different CTA protocols and patient populations, ensuring robustness in diverse clinical settings. This solution saves time, ensures precision, and standardizes the process, enhancing vascular surgeons' decision-making.
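For readers unfamiliar with how such volumetric quantities are obtained, computing volumes of the aneurysm sac components from a labeled segmentation reduces to counting voxels per label and multiplying by the voxel volume; a minimal sketch under an assumed (hypothetical) label convention follows.

```python
# Minimal sketch: per-structure volumes from a labeled segmentation mask.
import numpy as np

# Hypothetical label convention: 0 background, 1 lumen, 2 thrombus, 3 calcification.
labels = np.zeros((64, 64, 64), dtype=np.uint8)
labels[20:40, 20:40, 20:40] = 1
labels[25:30, 25:30, 25:30] = 2

voxel_spacing_mm = (0.8, 0.8, 1.0)                    # taken from the CTA header
voxel_volume_ml = np.prod(voxel_spacing_mm) / 1000.0  # mm^3 -> mL

for name, value in [("lumen", 1), ("thrombus", 2), ("calcification", 3)]:
    volume_ml = np.sum(labels == value) * voxel_volume_ml
    print(f"{name}: {volume_ml:.2f} mL")
```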

Deep learning diagnosis of hepatic echinococcosis based on dual-modality plain CT and ultrasound images: a large-scale, multicenter, diagnostic study.

Zhang J, Zhang J, Tang H, Meng Y, Chen X, Chen J, Chen Y

May 12 2025
Given the currently limited accuracy of imaging-based screening for hepatic echinococcosis (HCE) in under-resourced areas, the authors developed and validated a multimodal imaging system (HEAC) based on plain computed tomography (CT) combined with ultrasound for HCE screening in those areas. In this study, we developed a multimodal deep learning diagnostic system by integrating ultrasound and plain CT imaging data to differentiate hepatic echinococcosis, liver cysts, liver abscesses, and healthy liver conditions. We collected a dataset of 8979 cases spanning 18 years from eight hospitals in Xinjiang, China, including both retrospective and prospective data. To enhance the robustness and generalization of the diagnostic model, CT and ultrasound images were modeled using EfficientNet3D and EfficientNet-B0, respectively, after which external and prospective tests were conducted and the model's performance was compared with diagnoses made by experienced physicians. Across internal and external test sets, the fused CT-ultrasound model consistently outperformed the individual modality models and physician diagnoses. In the prospective test set from the same center, the fusion model achieved an accuracy of 0.816, sensitivity of 0.849, specificity of 0.942, and an AUC of 0.963, significantly exceeding physician performance (accuracy 0.900, sensitivity 0.800, specificity 0.933). The external test sets across seven other centers demonstrated similar results, with the fusion model achieving an overall accuracy of 0.849, sensitivity of 0.859, specificity of 0.942, and AUC of 0.961. The multimodal deep learning diagnostic system that integrates CT and ultrasound significantly increases diagnostic accuracy for HCE, liver cysts, and liver abscesses. It outperforms standard single-modality approaches and physician diagnoses, lowering misdiagnosis rates and increasing diagnostic reliability. These findings underscore the promise of multimodal imaging systems for tackling diagnostic challenges in low-resource areas, paving the way for improved accessibility and outcomes of medical care.
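The abstract names EfficientNet3D and EfficientNet-B0 as the CT and ultrasound backbones; the sketch below illustrates only the general late-fusion pattern, with tiny placeholder encoders standing in for those networks, and is not the authors' architecture.

```python
# Schematic late fusion of a 3D CT encoder and a 2D ultrasound encoder (placeholders).
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, num_classes=4):   # e.g. HCE, liver cyst, liver abscess, healthy
        super().__init__()
        self.ct_encoder = nn.Sequential(  # stand-in for a 3D backbone such as EfficientNet3D
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.us_encoder = nn.Sequential(  # stand-in for a 2D backbone such as EfficientNet-B0
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16 + 16, num_classes)

    def forward(self, ct_volume, us_image):
        feats = torch.cat([self.ct_encoder(ct_volume), self.us_encoder(us_image)], dim=1)
        return self.head(feats)

model = FusionClassifier()
logits = model(torch.randn(2, 1, 32, 64, 64), torch.randn(2, 1, 128, 128))
print(logits.shape)   # torch.Size([2, 4])
```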

AutoFRS: an externally validated, annotation-free approach to computational preoperative complication risk stratification in pancreatic surgery - an experimental study.

Kolbinger FR, Bhasker N, Schön F, Cser D, Zwanenburg A, Löck S, Hempel S, Schulze A, Skorobohach N, Schmeiser HM, Klotz R, Hoffmann RT, Probst P, Müller B, Bodenstedt S, Wagner M, Weitz J, Kühn JP, Distler M, Speidel S

May 12 2025
The risk of postoperative pancreatic fistula (POPF), one of the most dreaded complications after pancreatic surgery, can be predicted from preoperative imaging and tabular clinical routine data. However, existing studies suffer from limited clinical applicability due to a need for manual data annotation and a lack of external validation. We propose AutoFRS (automated fistula risk score software), an externally validated end-to-end prediction tool for POPF risk stratification based on multimodal preoperative data. We trained AutoFRS on preoperative contrast-enhanced computed tomography imaging and clinical data from 108 patients undergoing pancreatic head resection and validated it on an external cohort of 61 patients. Prediction performance was assessed using the area under the receiver operating characteristic curve (AUC) and balanced accuracy. In addition, model performance was compared to the updated alternative fistula risk score (ua-FRS), the current clinical gold standard method for intraoperative POPF risk stratification. AutoFRS achieved an AUC of 0.81 and a balanced accuracy of 0.72 in internal validation and an AUC of 0.79 and a balanced accuracy of 0.70 in external validation. In a patient subset with documented intraoperative POPF risk factors, AutoFRS (AUC: 0.84 ± 0.05) performed on par with the ua-FRS (AUC: 0.85 ± 0.06). The AutoFRS web application facilitates annotation-free prediction of POPF from preoperative imaging and clinical data based on the AutoFRS prediction model. POPF can be predicted from multimodal clinical routine data without human data annotation, automating the risk prediction process. We provide additional evidence of the clinical feasibility of preoperative POPF risk stratification and introduce a software pipeline for future prospective evaluation.
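Both reported metrics are standard scikit-learn calls; a minimal sketch on hypothetical predictions (not study data) is shown below.

```python
# Minimal sketch: AUC and balanced accuracy for a binary POPF risk model.
import numpy as np
from sklearn.metrics import roc_auc_score, balanced_accuracy_score

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])                           # POPF occurred (1) or not (0)
y_score = np.array([0.10, 0.40, 0.70, 0.80, 0.30, 0.60, 0.20, 0.55])  # predicted risk

auc = roc_auc_score(y_true, y_score)
bal_acc = balanced_accuracy_score(y_true, y_score >= 0.5)             # binarize at 0.5
print(f"AUC = {auc:.2f}, balanced accuracy = {bal_acc:.2f}")
```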

Artificial intelligence-assisted diagnosis of early allograft dysfunction based on ultrasound image and data.

Meng Y, Wang M, Niu N, Zhang H, Yang J, Zhang G, Liu J, Tang Y, Wang K

May 12 2025
Early allograft dysfunction (EAD) significantly affects liver transplantation prognosis. This study evaluated the effectiveness of artificial intelligence (AI)-assisted methods in accurately diagnosing EAD and identifying its causes. The primary metric for assessing accuracy was the area under the receiver operating characteristic curve (AUC). Accuracy, sensitivity, and specificity were calculated and analyzed to compare the performance of the AI models with each other and with radiologists. EAD classification followed the criteria established by Olthoff et al. A total of 582 liver transplant patients who underwent transplantation between December 2012 and June 2021 were selected. Among these, 117 patients (mean age 33.5 ± 26.5 years, 80 men) were evaluated. The ultrasound parameters, images, and clinical information of the patients were extracted from the database to train the AI model. The AUC for the ultrasound-spectrogram fusion network constructed from four ultrasound images and medical data was 0.968 (95% CI: 0.940, 0.991), outperforming radiologists by 30% across all metrics. AI assistance significantly improved diagnostic accuracy, sensitivity, and specificity (P < 0.05) for both experienced and less-experienced physicians. EAD currently lacks efficient methods for diagnosis and causation analysis. The integration of AI and ultrasound enhances diagnostic accuracy and causation analysis. By modeling only the images and data related to blood flow, the AI model effectively analyzed patients with EAD caused by an abnormal blood supply. Our model can assist radiologists in reducing judgment discrepancies, potentially benefiting patients with EAD in underdeveloped regions. Furthermore, it enables targeted treatment for those with an abnormal blood supply.
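The abstract does not say how the 95% confidence interval for the AUC was obtained; a percentile bootstrap on synthetic data is sketched below as one common approach.

```python
# Minimal sketch: percentile-bootstrap 95% CI for the AUC (synthetic data).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_score = np.clip(0.5 * y_true + rng.normal(0.3, 0.2, size=200), 0, 1)

aucs = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), size=len(y_true))   # resample cases with replacement
    if len(np.unique(y_true[idx])) < 2:                     # skip resamples missing a class
        continue
    aucs.append(roc_auc_score(y_true[idx], y_score[idx]))

low, high = np.percentile(aucs, [2.5, 97.5])
print(f"AUC = {roc_auc_score(y_true, y_score):.3f} (95% CI {low:.3f}-{high:.3f})")
```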

Accelerating prostate rs-EPI DWI with deep learning: Halving scan time, enhancing image quality, and validating in vivo.

Zhang P, Feng Z, Chen S, Zhu J, Fan C, Xia L, Min X

May 12 2025
This study aims to evaluate the feasibility and effectiveness of deep learning-based super-resolution techniques to reduce scan time while preserving image quality in high-resolution prostate diffusion-weighted imaging (DWI) with readout-segmented echo-planar imaging (rs-EPI). We retrospectively and prospectively analyzed prostate rs-EPI DWI data, employing deep learning super-resolution models, particularly the Multi-Scale Self-Similarity Network (MSSNet), to reconstruct low-resolution images into high-resolution images. Performance metrics such as the structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and normalized root mean squared error (NRMSE) were used to compare reconstructed images against the high-resolution ground truth (HRGT). Additionally, we evaluated the apparent diffusion coefficient (ADC) values and signal-to-noise ratio (SNR) across different models. The MSSNet model demonstrated superior performance in image reconstruction, achieving maximum SSIM values of 0.9798 and significant improvements in PSNR and NRMSE compared to other models. The deep learning approach reduced the rs-EPI DWI scan time by 54.4% while maintaining image quality comparable to HRGT. Pearson correlation analysis revealed a strong correlation between ADC values from deep learning-reconstructed images and the ground truth, with differences remaining within 5%. Furthermore, all models showed significant SNR enhancement, with MSSNet performing best across most cases. Deep learning-based super-resolution techniques, particularly MSSNet, effectively reduce scan time and enhance image quality in prostate rs-EPI DWI, making them promising tools for clinical applications.
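The three image-quality metrics reported above are available in scikit-image; the sketch below computes them on synthetic placeholder arrays rather than the study's images.

```python
# Minimal sketch: SSIM, PSNR, and NRMSE against a high-resolution ground truth.
import numpy as np
from skimage.metrics import (structural_similarity,
                             peak_signal_noise_ratio,
                             normalized_root_mse)

hr_gt = np.random.rand(128, 128).astype(np.float32)                  # HR ground truth
recon = hr_gt + 0.02 * np.random.randn(128, 128).astype(np.float32)  # reconstructed image

ssim = structural_similarity(hr_gt, recon, data_range=1.0)
psnr = peak_signal_noise_ratio(hr_gt, recon, data_range=1.0)
nrmse = normalized_root_mse(hr_gt, recon)
print(f"SSIM = {ssim:.4f}, PSNR = {psnr:.2f} dB, NRMSE = {nrmse:.4f}")
```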

Enhancing noninvasive pancreatic cystic neoplasm diagnosis with multimodal machine learning.

Huang W, Xu Y, Li Z, Li J, Chen Q, Huang Q, Wu Y, Chen H

May 12 2025
Pancreatic cystic neoplasms (PCNs) are a complex group of lesions with a spectrum of malignancy. Accurate differentiation of PCN types is crucial for patient management, as misdiagnosis can result in unnecessary surgeries or treatment delays, affecting quality of life. The need to improve patient outcomes and reduce the impact of these conditions underscores the significance of developing a non-invasive, accurate diagnostic model. We developed a machine learning model capable of accurately identifying different types of PCNs in a non-invasive manner, using a dataset comprising 449 MRI and 568 CT scans from adult patients spanning 2009 to 2022. The results indicate that our multimodal machine learning algorithm, which integrates both clinical and imaging data, significantly outperforms algorithms trained on single-source data. Specifically, it demonstrated state-of-the-art performance in classifying PCN types, achieving an average accuracy of 91.2%, precision of 91.7%, sensitivity of 88.9%, and specificity of 96.5%. Remarkably, for patients with mucinous cystic neoplasms (MCNs), the model achieved 100% prediction accuracy regardless of whether they underwent MRI or CT imaging. This indicates that our non-invasive multimodal machine learning model offers strong support for the early screening of MCNs and represents a significant advancement in PCN diagnosis for improving clinical practice and patient outcomes. We also achieved the best results on an additional pancreatic cancer dataset, which further demonstrates the generalizability of our model.
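Since the abstract reports sensitivity and specificity for a multi-class problem, the sketch below shows one common way to derive them per class (one-vs-rest) from a confusion matrix; the class names and counts are hypothetical, not the study's data.

```python
# Minimal sketch: one-vs-rest sensitivity/specificity from a multi-class confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

classes = ["IPMN", "MCN", "SCN"]          # example PCN types, not the study's full label set
y_true = np.array([0, 0, 1, 2, 1, 2, 0, 1, 2, 0])
y_pred = np.array([0, 0, 1, 2, 1, 1, 0, 1, 2, 2])

cm = confusion_matrix(y_true, y_pred, labels=range(len(classes)))
for i, name in enumerate(classes):
    tp = cm[i, i]
    fn = cm[i, :].sum() - tp
    fp = cm[:, i].sum() - tp
    tn = cm.sum() - tp - fn - fp
    print(f"{name}: sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```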

Two-Stage Automatic Liver Classification System Based on Deep Learning Approach Using CT Images.

Kılıç R, Yalçın A, Alper F, Oral EA, Ozbek IY

May 12 2025
Alveolar echinococcosis (AE) is a parasitic disease caused by Echinococcus multilocularis, for which early detection is crucial for effective treatment. This study introduces a novel method for the early diagnosis of liver diseases by differentiating between tumor, AE, and healthy cases using non-contrast CT images, which are widely accessible and eliminate the risks associated with contrast agents. The proposed approach integrates an automatic liver region detection method based on an RCNN, followed by a CNN-based classification framework. A dataset comprising over 27,000 thorax-abdominal images from 233 patients, including 8206 images with liver tissue, was constructed and used to evaluate the proposed method. The experimental results demonstrate the importance of the two-stage classification approach. In the 2-class classification problem (healthy vs. non-healthy), an accuracy of 0.936 (95% CI: 0.925-0.947) was obtained, and in the 3-class classification problem (AE, tumor, and healthy), the accuracy was 0.863 (95% CI: 0.847-0.879). These results highlight the potential of the proposed framework as a fully automatic approach to liver classification without the use of contrast agents. Furthermore, the proposed framework demonstrates competitive performance compared to other state-of-the-art techniques, suggesting its applicability in clinical practice.
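The confidence intervals quoted above can be reproduced in form (not in exact value, since the test-set size is not restated here) with the usual normal approximation for a proportion; the sample size below is a placeholder.

```python
# Minimal sketch: normal-approximation 95% CI for a classification accuracy.
import math

def accuracy_ci(acc, n, z=1.96):
    half_width = z * math.sqrt(acc * (1 - acc) / n)
    return acc - half_width, acc + half_width

low, high = accuracy_ci(0.936, 2000)   # n = 2000 is a placeholder, not the study's test size
print(f"accuracy 0.936, 95% CI ({low:.3f}, {high:.3f})")
```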