
Accelerating prostate rs-EPI DWI with deep learning: Halving scan time, enhancing image quality, and validating in vivo.

Zhang P, Feng Z, Chen S, Zhu J, Fan C, Xia L, Min X

PubMed · May 12, 2025
This study aims to evaluate the feasibility and effectiveness of deep learning-based super-resolution techniques to reduce scan time while preserving image quality in high-resolution prostate diffusion-weighted imaging (DWI) with readout-segmented echo-planar imaging (rs-EPI). We retrospectively and prospectively analyzed prostate rs-EPI DWI data, employing deep learning super-resolution models, particularly the Multi-Scale Self-Similarity Network (MSSNet), to reconstruct low-resolution images into high-resolution ones. Performance metrics such as the structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and normalized root mean squared error (NRMSE) were used to compare reconstructed images against the high-resolution ground truth (HR<sub>GT</sub>). Additionally, we evaluated apparent diffusion coefficient (ADC) values and signal-to-noise ratio (SNR) across the different models. The MSSNet model demonstrated superior image reconstruction, achieving a maximum SSIM of 0.9798 and significant improvements in PSNR and NRMSE compared with the other models. The deep learning approach reduced rs-EPI DWI scan time by 54.4% while maintaining image quality comparable to HR<sub>GT</sub>. Pearson correlation analysis revealed a strong correlation between ADC values from deep learning-reconstructed images and the ground truth, with differences remaining within 5%. Furthermore, all models showed significant SNR enhancement, with MSSNet performing best in most cases. Deep learning-based super-resolution techniques, particularly MSSNet, effectively reduce scan time and enhance image quality in prostate rs-EPI DWI, making them promising tools for clinical applications.
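The three metrics used to score the reconstructions (SSIM, PSNR, NRMSE) are standard image-fidelity measures. The study's own evaluation code is not published; as a rough illustration, they can be computed from image arrays in plain NumPy, where `global_ssim` is a single-window simplification of the usual sliding-window SSIM:

```python
import numpy as np

def psnr(gt, pred, data_range=None):
    """Peak signal-to-noise ratio in dB."""
    if data_range is None:
        data_range = gt.max() - gt.min()
    mse = np.mean((gt - pred) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * np.log10(data_range ** 2 / mse)

def nrmse(gt, pred):
    """RMSE normalised by the root-mean-square of the ground truth
    (the 'euclidean' normalisation convention)."""
    return np.sqrt(np.mean((gt - pred) ** 2)) / np.sqrt(np.mean(gt ** 2))

def global_ssim(gt, pred, data_range=None):
    """SSIM computed once over the whole image (no sliding window)."""
    if data_range is None:
        data_range = gt.max() - gt.min()
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = gt.mean(), pred.mean()
    var_x, var_y = gt.var(), pred.var()
    cov = np.mean((gt - mu_x) * (pred - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Higher SSIM and PSNR and lower NRMSE indicate a reconstruction closer to the HR ground truth.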

Inference-specific learning for improved medical image segmentation.

Chen Y, Liu S, Li M, Han B, Xing L

PubMed · May 12, 2025
Deep learning networks map input data to output predictions by fitting network parameters to training data. However, applying a trained network to new, unseen inference data resembles an interpolation process, which may lead to inaccurate predictions if the training and inference data distributions differ significantly. This study aims to improve the prediction accuracy of deep learning networks on each inference case by bridging the gap between training and inference data. We propose an inference-specific learning strategy that enhances the network learning process without modifying the network structure. By aligning training data to closely match the specific inference data, we generate an inference-specific training dataset, enhancing network optimization around the inference data point for more accurate predictions. Taking medical image auto-segmentation as an example, we develop an inference-specific auto-segmentation framework consisting of initial segmentation learning, inference-specific training data deformation, and inference-specific segmentation refinement. The framework is evaluated on public abdominal, head-neck, and pancreas CT datasets comprising 30, 42, and 210 cases, respectively. Experimental results show that our method improves the organ-averaged mean Dice by 6.2% (p-value = 0.001), 1.5% (p-value = 0.003), and 3.7% (p-value < 0.001) on the three datasets, respectively, with a more notable increase for difficult-to-segment organs (such as a 21.7% increase for the gallbladder [p-value = 0.004]). Incorporating organ mask-based weak supervision into the training data alignment learning generally improves inference-specific auto-segmentation accuracy over image intensity-based alignment. In addition, a moving-average calculation of the inference organ mask during learning strengthens both the robustness and accuracy of the final inference segmentation. By leveraging inference data during training, the proposed inference-specific learning strategy consistently improves auto-segmentation accuracy and holds the potential to be broadly applied for enhanced deep learning decision-making.
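The organ-averaged mean Dice reported above measures volume overlap between predicted and ground-truth masks. The sketch below is illustrative rather than the authors' code; `dice` and `mean_dice` are hypothetical helper names:

```python
import numpy as np

def dice(mask_a, mask_b, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(mask_a).astype(bool)
    b = np.asarray(mask_b).astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)

def mean_dice(pred_labels, gt_labels, organ_ids):
    """Organ-averaged Dice over a multi-label segmentation map."""
    scores = [dice(pred_labels == k, gt_labels == k) for k in organ_ids]
    return float(np.mean(scores))
```

A Dice of 1.0 means perfect overlap; small organs such as the gallbladder tend to score lower, which is why the paper highlights them separately.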

Effect of Deep Learning-Based Image Reconstruction on Lesion Conspicuity of Liver Metastases in Pre- and Post-contrast Enhanced Computed Tomography.

Ichikawa Y, Hasegawa D, Domae K, Nagata M, Sakuma H

PubMed · May 12, 2025
The purpose of this study was to investigate the utility of deep learning image reconstruction at medium and high intensity levels (DLIR-M and DLIR-H, respectively) for better delineation of liver metastases on pre-contrast and post-contrast CT, compared with conventional hybrid iterative reconstruction (IR). Forty-one patients with liver metastases who underwent abdominal CT were studied. The raw data were reconstructed with three different algorithms: hybrid IR (ASiR-V 50%), DLIR-M (TrueFidelity-M), and DLIR-H (TrueFidelity-H). Three experienced radiologists independently rated the lesion conspicuity of liver metastases on a qualitative 5-point scale (1 = very poor; 5 = excellent). The observers also selected, for each patient, the pre- and post-contrast image series they considered most preferable for assessing liver metastases. For pre-contrast CT, lesion conspicuity scores for DLIR-H and DLIR-M were significantly higher than those for hybrid IR for two of the three observers, with no significant difference for the third. For post-contrast CT, lesion conspicuity scores for DLIR-H were significantly higher than those for DLIR-M for two of the three observers (Observer 1: DLIR-H, 4.3 ± 0.8 vs. DLIR-M, 3.9 ± 0.9, p = 0.0006; Observer 3: DLIR-H, 4.6 ± 0.6 vs. DLIR-M, 4.3 ± 0.6, p = 0.0013). For post-contrast CT, all observers most often selected DLIR-H as the best reconstruction for diagnosing liver metastases. For pre-contrast CT, however, the preferred reconstruction method varied among the three observers, and DLIR was not necessarily preferred over hybrid IR.

Two-Stage Automatic Liver Classification System Based on Deep Learning Approach Using CT Images.

Kılıç R, Yalçın A, Alper F, Oral EA, Ozbek IY

PubMed · May 12, 2025
Alveolar echinococcosis (AE) is a parasitic disease caused by Echinococcus multilocularis, for which early detection is crucial for effective treatment. This study introduces a method for the early diagnosis of liver disease by differentiating between tumor, AE, and healthy cases using non-contrast CT images, which are widely accessible and avoid the risks associated with contrast agents. The proposed approach integrates an automatic liver region detection step based on an R-CNN, followed by a CNN-based classification framework. A dataset comprising over 27,000 thorax-abdominal images from 233 patients, including 8206 images with liver tissue, was constructed and used to evaluate the proposed method. The experimental results demonstrate the value of the two-stage classification approach: the 2-class problem (healthy vs. non-healthy) achieved an accuracy of 0.936 (95% CI: 0.925-0.947), and the 3-class problem (AE, tumor, healthy) achieved 0.863 (95% CI: 0.847-0.879). These results highlight the potential of the proposed framework as a fully automatic approach to liver classification without contrast agents. Furthermore, the framework is competitive with other state-of-the-art techniques, suggesting its applicability in clinical practice.
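The paper reports accuracies with 95% confidence intervals but does not state which interval method was used. One common choice for a binomial proportion such as accuracy is the Wilson score interval, sketched here as an assumption:

```python
import math

def wilson_ci(correct, total, z=1.96):
    """Wilson score interval for a proportion (z=1.96 gives ~95% coverage)."""
    p = correct / total
    denom = 1 + z ** 2 / total
    centre = (p + z ** 2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z ** 2 / (4 * total ** 2)) / denom
    return centre - half, centre + half
```

Unlike the naive normal approximation, the Wilson interval stays inside [0, 1] and behaves well for accuracies near 1.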

Automatic CTA analysis for blood vessels and aneurysm features extraction in EVAR planning.

Robbi E, Ravanelli D, Allievi S, Raunig I, Bonvini S, Passerini A, Trianni A

PubMed · May 12, 2025
Endovascular Aneurysm Repair (EVAR) is a minimally invasive procedure crucial for treating abdominal aortic aneurysms (AAA), where precise pre-operative planning is essential. Current clinical methods rely on manual measurements, which are time-consuming and prone to error. Although AI solutions are increasingly being developed to automate parts of these processes, most existing approaches focus on computing volumes and diameters, falling short of a fully automated pre-operative analysis. This work presents BRAVE (Blood Vessels Recognition and Aneurysms Visualization Enhancement), the first comprehensive AI-driven solution for vascular segmentation and AAA analysis using pre-operative CTA scans. BRAVE provides exhaustive segmentation, identifying both the primary abdominal aorta and the secondary vessels that existing methods often overlook, giving a complete view of the vascular structure. The pipeline performs advanced volumetric analysis of the aneurysm sac, quantifying thrombotic tissue and calcifications, and automatically identifies the proximal and distal sealing zones critical for successful EVAR procedures. BRAVE enables fully automated processing, reducing manual intervention and improving clinical workflow efficiency. Trained on a multi-center open-access dataset, it generalizes across different CTA protocols and patient populations, ensuring robustness in diverse clinical settings. This solution saves time, ensures precision, and standardizes the process, supporting vascular surgeons' decision-making.

Enhancing noninvasive pancreatic cystic neoplasm diagnosis with multimodal machine learning.

Huang W, Xu Y, Li Z, Li J, Chen Q, Huang Q, Wu Y, Chen H

PubMed · May 12, 2025
Pancreatic cystic neoplasms (PCNs) are a complex group of lesions spanning a spectrum of malignancy. Accurate differentiation of PCN types is crucial for patient management, as misdiagnosis can result in unnecessary surgery or treatment delays, affecting quality of life. The need to improve patient outcomes and reduce the impact of these conditions underscores the importance of a non-invasive, accurate diagnostic model. We developed a machine learning model capable of accurately identifying different types of PCNs in a non-invasive manner, using a dataset comprising 449 MRI and 568 CT scans from adult patients, collected between 2009 and 2022. The results indicate that our multimodal machine learning algorithm, which integrates clinical and imaging data, significantly outperforms algorithms trained on single-source data. Specifically, it achieved state-of-the-art performance in classifying PCN types, with an average accuracy of 91.2%, precision of 91.7%, sensitivity of 88.9%, and specificity of 96.5%. Remarkably, for patients with mucinous cystic neoplasms (MCNs), the model achieved 100% prediction accuracy regardless of whether MRI or CT imaging was used. This indicates that our non-invasive multimodal model offers strong support for early screening of MCNs and represents a significant advance in PCN diagnosis, with the potential to improve clinical practice and patient outcomes. The model also achieved the best results on an additional pancreatic cancer dataset, further supporting its generality.
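As a quick reference, the four reported metrics (accuracy, precision, sensitivity, specificity) follow directly from binary confusion-matrix counts. This small sketch is illustrative, not the study's code; in a multi-class setting such as PCN typing, these are typically computed per class and averaged:

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),   # also called recall
        "specificity": tn / (tn + fp),
    }
```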

Artificial intelligence-assisted diagnosis of early allograft dysfunction based on ultrasound image and data.

Meng Y, Wang M, Niu N, Zhang H, Yang J, Zhang G, Liu J, Tang Y, Wang K

PubMed · May 12, 2025
Early allograft dysfunction (EAD) significantly affects liver transplantation prognosis, yet it lacks efficient methods for diagnosis and causation analysis. This study evaluated the effectiveness of artificial intelligence (AI)-assisted methods in accurately diagnosing EAD and identifying its causes. EAD classification followed the criteria established by Olthoff et al. A total of 582 liver transplant patients who underwent transplantation between December 2012 and June 2021 were selected; among these, 117 patients (mean age 33.5 ± 26.5 years, 80 men) were evaluated. Ultrasound parameters, images, and clinical information were extracted from the database to train the AI model. The primary metric for assessing accuracy was the area under the receiver operating characteristic curve (AUC); accuracy, sensitivity, and specificity were also calculated to compare the AI models with each other and with radiologists. The AUC for the ultrasound-spectrogram fusion network constructed from four ultrasound images and medical data was 0.968 (95% CI: 0.940, 0.991), outperforming radiologists by 30% across all metrics. AI assistance significantly improved diagnostic accuracy, sensitivity, and specificity (P < 0.050) for both experienced and less-experienced physicians. The integration of AI and ultrasound thus enhances diagnostic accuracy and causation analysis: by modeling only images and data related to blood flow, the AI model effectively identified patients with EAD caused by abnormal blood supply, enabling targeted treatment for them. Our model can also assist radiologists in reducing judgment discrepancies, potentially benefiting patients with EAD in underdeveloped regions.
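The AUC used as the primary metric here is equivalent to the Mann-Whitney U statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch (pairwise comparison, adequate for cohorts of this size; not the study's code):

```python
import numpy as np

def auc_mann_whitney(labels, scores):
    """AUC via the Mann-Whitney U statistic, with ties counted as half-wins."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)
```

An AUC of 0.5 corresponds to chance-level scoring; 1.0 to perfect separation of positives from negatives.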

AutoFRS: an externally validated, annotation-free approach to computational preoperative complication risk stratification in pancreatic surgery - an experimental study.

Kolbinger FR, Bhasker N, Schön F, Cser D, Zwanenburg A, Löck S, Hempel S, Schulze A, Skorobohach N, Schmeiser HM, Klotz R, Hoffmann RT, Probst P, Müller B, Bodenstedt S, Wagner M, Weitz J, Kühn JP, Distler M, Speidel S

PubMed · May 12, 2025
The risk of postoperative pancreatic fistula (POPF), one of the most dreaded complications after pancreatic surgery, can be predicted from preoperative imaging and tabular clinical routine data. However, existing studies suffer from limited clinical applicability due to the need for manual data annotation and a lack of external validation. We propose AutoFRS (automated fistula risk score software), an externally validated end-to-end prediction tool for POPF risk stratification based on multimodal preoperative data. We trained AutoFRS on preoperative contrast-enhanced computed tomography imaging and clinical data from 108 patients undergoing pancreatic head resection and validated it on an external cohort of 61 patients. Prediction performance was assessed using the area under the receiver operating characteristic curve (AUC) and balanced accuracy. In addition, model performance was compared to the updated alternative fistula risk score (ua-FRS), the current clinical gold standard for intraoperative POPF risk stratification. AutoFRS achieved an AUC of 0.81 and a balanced accuracy of 0.72 in internal validation, and an AUC of 0.79 and a balanced accuracy of 0.70 in external validation. In a patient subset with documented intraoperative POPF risk factors, AutoFRS (AUC: 0.84 ± 0.05) performed on par with the ua-FRS (AUC: 0.85 ± 0.06). The AutoFRS web application facilitates annotation-free prediction of POPF from preoperative imaging and clinical data based on the AutoFRS prediction model. POPF can thus be predicted from multimodal clinical routine data without human data annotation, automating the risk prediction process. We provide additional evidence of the clinical feasibility of preoperative POPF risk stratification and introduce a software pipeline for future prospective evaluation.

Preoperative radiomics models using CT and MRI for microsatellite instability in colorectal cancer: a systematic review and meta-analysis.

Capello Ingold G, Martins da Fonseca J, Kolenda Zloić S, Verdan Moreira S, Kago Marole K, Finnegan E, Yoshikawa MH, Daugėlaitė S, Souza E Silva TX, Soato Ratti MA

PubMed · May 10, 2025
Microsatellite instability (MSI) is a novel predictive biomarker for chemotherapy and immunotherapy response, as well as a prognostic indicator, in colorectal cancer (CRC). The current standard for MSI identification is polymerase chain reaction (PCR) testing or immunohistochemical analysis of tumor biopsy samples; however, tumor heterogeneity and procedural complications pose challenges to these techniques. CT- and MRI-based radiomics models offer a promising non-invasive alternative. A systematic search of PubMed, Embase, the Cochrane Library, and Scopus was conducted to identify studies evaluating the diagnostic performance of CT- and MRI-based radiomics models for detecting MSI status in CRC. Pooled area under the curve (AUC), sensitivity, and specificity were calculated in RStudio using a random-effects model. Forest plots and a summary ROC curve were generated, and heterogeneity was assessed using I² statistics and explored through sensitivity analyses, threshold effect assessment, subgroup analyses, and meta-regression. Seventeen studies with a total of 6,045 subjects were included in the analysis. All studies extracted radiomic features from CT or MRI images of CRC patients with confirmed MSI status to train machine learning models. The pooled AUC was 0.815 (95% CI: 0.784-0.840) for CT-based studies and 0.900 (95% CI: 0.819-0.943) for MRI-based studies. Significant heterogeneity was identified and addressed through extensive analysis. Radiomics models represent a promising non-invasive tool for predicting MSI status in CRC patients. These findings may serve as a foundation for future studies aimed at developing and validating improved models, ultimately enhancing the diagnosis, treatment, and prognosis of colorectal cancer.
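The pooled AUCs come from a random-effects model fitted in RStudio. As an illustration of the underlying arithmetic (not the review's code), here is a minimal DerSimonian-Laird pooling sketch; in practice per-study AUCs are usually logit-transformed before pooling and back-transformed afterwards:

```python
import math

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling.
    `effects`: per-study estimates (e.g. logit-transformed AUCs);
    `variances`: their within-study variances.
    Returns (pooled estimate, standard error, tau^2)."""
    w = [1 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, se, tau2
```

The between-study variance tau² is what distinguishes the random-effects model from a fixed-effect one; the significant heterogeneity reported above implies tau² > 0.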

Radiomics prediction of surgery in ulcerative colitis refractory to medical treatment.

Sakamoto K, Okabayashi K, Seishima R, Shigeta K, Kiyohara H, Mikami Y, Kanai T, Kitagawa Y

PubMed · May 10, 2025
Surgical decisions in drug-resistant ulcerative colitis (UC) are determined by complex factors. This study evaluated how well radiomics analysis predicts whether hospitalized patients with UC will fall into the surgical or the medical treatment group by discharge. This single-center retrospective cohort study used admission CT scans of patients with UC admitted between 2015 and 2022. The prediction target was whether the patient would undergo surgery by the time of discharge. Radiomics features were extracted using the rectal wall at the level of the tailbone tip on CT as the region of interest. The CT data were randomly split into a training cohort and a validation cohort, and LASSO regression on the training cohort produced a formula for calculating the radiomics score. A total of 147 patients were selected, and data from 184 CT scans were collected; data from 157 CT scans matched the selection criteria and were included. Five features were used for the radiomics score. Univariate logistic regression analysis of clinical information detected a significant influence of severity (p < 0.001), number of drugs used until surgery (p < 0.001), Lichtiger score (p = 0.024), and hemoglobin (p = 0.010). A nomogram combining these items discriminated between the surgery and medical treatment groups with an AUC of 0.822 (95% confidence interval (CI) 0.841-0.951) in the training cohort and an AUC of 0.868 (95% CI 0.729-1.000) in the validation cohort, indicating good discriminative ability. Radiomics analysis of admission CT images of patients with UC, combined with clinical data, showed high predictive ability regarding a treatment course of surgery or medical treatment.
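The radiomics score above is the linear combination of the five LASSO-selected features. The authors' pipeline is not published; this minimal coordinate-descent LASSO sketch assumes standardized (zero-mean) features and is purely illustrative:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Minimal coordinate-descent LASSO: minimizes
    (1/2n)||y - X b||^2 + lam * ||b||_1 over coefficients b."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding feature j's current contribution
            resid = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ resid / n
            z = X[:, j] @ X[:, j] / n
            # soft-thresholding step drives small coefficients to exactly zero
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return beta

def radiomics_score(features, beta):
    """Radiomics score: linear combination of the selected features."""
    return features @ beta
```

Increasing `lam` shrinks more coefficients to exactly zero; tuning it (for example by cross-validation) is what yields a sparse signature such as the five-feature score here.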