
ToPoMesh: accurate 3D surface reconstruction from CT volumetric data via topology modification.

Chen J, Zhu Q, Xie B, Li T

PubMed | May 27, 2025
Traditional computed tomography (CT) methods for 3D reconstruction face resolution limitations and require time-consuming post-processing workflows. While deep learning techniques improve the accuracy of segmentation, traditional voxel-based segmentation and surface reconstruction pipelines tend to introduce artifacts such as disconnected regions, topological inconsistencies, and stepped distortions. To overcome these challenges, we propose ToPoMesh, an end-to-end deep learning framework for direct reconstruction of high-fidelity surface meshes from CT volume data. Our approach introduces three core innovations: (1) accurate local and global shape modeling by preserving and enhancing local feature information through residual connections and self-attention mechanisms in graph convolutional networks; (2) an adaptive variant density (Avd) mesh de-pooling strategy that dynamically optimizes the vertex distribution; (3) a topology modification module that iteratively prunes erroneous surfaces and smooths boundaries via variable regularity terms to obtain finer mesh surfaces. Experiments on the LiTS, MSD pancreas tumor, MSD hippocampus, and MSD spleen datasets demonstrate that ToPoMesh outperforms state-of-the-art methods. Quantitative evaluations show a 57.4% reduction in Chamfer distance (liver) and a 0.47% improvement in F-score compared to end-to-end 3D reconstruction methods, while qualitative results confirm enhanced fidelity for thin structures and complex anatomical topologies versus segmentation frameworks. Importantly, our method eliminates the need for manual post-processing, reconstructs 3D meshes directly from images, and can provide precise guidance for surgical planning and diagnosis.
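
The first innovation, residual connections combined with vertex self-attention inside graph convolutional layers, can be illustrated with a short PyTorch sketch. This is not the authors' ToPoMesh code: the dense-adjacency aggregation, layer sizes, and attention settings below are assumptions chosen only to show how a residual graph block with self-attention preserves local vertex features.

```python
# Minimal sketch (assumptions: dense row-normalized adjacency, feature dim 64);
# illustrative only, not the ToPoMesh architecture.
import torch
import torch.nn as nn

class ResidualGraphBlock(nn.Module):
    """Graph convolution with a residual connection and vertex self-attention."""
    def __init__(self, feat_dim: int, num_heads: int = 4):
        super().__init__()
        self.gconv = nn.Linear(feat_dim, feat_dim)          # shared vertex-wise weights
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (B, V, F) vertex features, adj: (B, V, V) row-normalized adjacency.
        h = torch.relu(self.gconv(adj @ x))                  # local neighborhood aggregation
        h, _ = self.attn(h, h, h)                            # global vertex self-attention
        return self.norm(x + h)                              # residual path keeps local detail

block = ResidualGraphBlock(feat_dim=64)
x = torch.randn(2, 100, 64)                                  # 2 meshes, 100 vertices each
adj = torch.softmax(torch.randn(2, 100, 100), dim=-1)        # stand-in normalized adjacency
out = block(x, adj)                                          # (2, 100, 64)
```

In a framework of this kind, the adjacency would typically be derived from the current mesh connectivity at each deformation stage rather than generated randomly as in this toy example.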

PlaNet-S: an Automatic Semantic Segmentation Model for Placenta Using U-Net and SegNeXt.

Saito I, Yamamoto S, Takaya E, Harigai A, Sato T, Kobayashi T, Takase K, Ueda T

PubMed | May 27, 2025
This study aimed to develop a fully automated semantic placenta segmentation model that integrates the U-Net and SegNeXt architectures through ensemble learning. A total of 218 pregnant women with suspected placental abnormalities who underwent magnetic resonance imaging (MRI) were enrolled, yielding 1090 annotated images for developing a deep learning model for placental segmentation. The images were standardized and divided into training and test sets. The performance of the Placental Segmentation Network (PlaNet-S), which integrates U-Net and SegNeXt within an ensemble framework, was assessed using Intersection over Union (IoU) and counting of connected components (CCC) against U-Net, U-Net++, and DS-transUNet. PlaNet-S had significantly higher IoU (0.78, SD = 0.10) than U-Net (0.73, SD = 0.13) (p < 0.005) and DS-transUNet (0.64, SD = 0.16) (p < 0.005), while the difference with U-Net++ (0.77, SD = 0.12) was not statistically significant. The CCC for PlaNet-S was significantly higher than that for U-Net (p < 0.005), U-Net++ (p < 0.005), and DS-transUNet (p < 0.005), matching the ground truth in 86.0%, 56.7%, 67.9%, and 20.9% of cases, respectively. PlaNet-S achieved higher IoU than U-Net and DS-transUNet, and comparable IoU to U-Net++. Moreover, PlaNet-S significantly outperformed all three models in CCC, indicating better agreement with the ground truth. This model addresses the challenge of time-consuming, physician-assisted manual segmentation and offers the potential for diverse applications in placental imaging analyses.
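
The two evaluation metrics used here, IoU and the connected-component count comparison (CCC), are straightforward to reproduce. The sketch below is an illustrative implementation on synthetic masks; the function names and thresholds are ours, not the authors' code.

```python
# Hedged sketch of the IoU and connected-component-count metrics for one 2D mask pair.
import numpy as np
from scipy import ndimage

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def component_count_matches(pred: np.ndarray, gt: np.ndarray) -> bool:
    # "CCC": does the prediction contain the same number of connected regions as the ground truth?
    _, n_pred = ndimage.label(pred)
    _, n_gt = ndimage.label(gt)
    return n_pred == n_gt

gt = np.zeros((128, 128), dtype=np.uint8); gt[30:80, 40:90] = 1      # synthetic placenta mask
pred = np.zeros_like(gt); pred[32:78, 42:92] = 1                      # synthetic prediction
print(iou(pred, gt), component_count_matches(pred, gt))
```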

Automated Body Composition Analysis Using DAFS Express on 2D MRI Slices at L3 Vertebral Level.

Akella V, Bagherinasab R, Lee H, Li JM, Nguyen L, Salehin M, Chow VTY, Popuri K, Beg MF

PubMed | May 27, 2025
Body composition analysis is vital in assessing health conditions such as obesity, sarcopenia, and metabolic syndromes. MRI provides detailed images of skeletal muscle (SM), visceral adipose tissue (VAT), and subcutaneous adipose tissue (SAT), but their manual segmentation is labor-intensive and limits clinical applicability. This study validates an automated tool for MRI-based 2D body composition analysis (Data Analysis Facilitation Suite (DAFS) Express), comparing its automated measurements with expert manual segmentations using UK Biobank data. A cohort of 399 participants from the UK Biobank dataset was selected, yielding 423 single L3 slices for analysis. DAFS Express performed automated segmentations of SM, VAT, and SAT, which were then manually corrected by expert raters for validation. Evaluation metrics included Jaccard coefficients, Dice scores, intraclass correlation coefficients (ICCs), and Bland-Altman plots to assess segmentation agreement and reliability. High agreement was observed between automated and manual segmentations, with mean Jaccard scores of 99.03% (SM), 95.25% (VAT), and 99.57% (SAT), and mean Dice scores of 99.51% (SM), 97.41% (VAT), and 99.78% (SAT). Cross-sectional area comparisons showed consistent measurements, with automated methods closely matching manual measurements for SM and SAT, and slightly higher values for VAT (SM: auto 132.51 cm², manual 132.36 cm²; VAT: auto 137.07 cm², manual 134.46 cm²; SAT: auto 203.39 cm², manual 202.85 cm²). ICCs confirmed strong reliability (SM 0.998, VAT 0.994, SAT 0.994). Bland-Altman plots revealed minimal biases, and boxplots illustrated similar distributions across SM, VAT, and SAT areas. On average, DAFS Express took 18 s per DICOM (126.9 min in total for the 423 images) to output segmentations and a measurement PDF for each DICOM. Automated segmentation of SM, VAT, and SAT from 2D MRI images using DAFS Express showed accuracy comparable to manual segmentation. This underscores its potential to streamline image analysis in research and clinical settings, enhancing diagnostic accuracy and efficiency. Future work should focus on further validation across diverse clinical applications and imaging conditions.
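
DAFS Express itself is a commercial tool, but the cross-sectional areas reported above (cm²) follow directly from a binary tissue mask and the DICOM pixel spacing. The sketch below shows that conversion under the assumption that a segmentation mask for the L3 slice is already available; it is not DAFS Express code, and the function name is ours.

```python
# Hedged sketch: convert a binary L3 tissue mask into cross-sectional area (cm^2)
# using the PixelSpacing tag of the corresponding DICOM slice.
import numpy as np
import pydicom

def cross_sectional_area_cm2(mask: np.ndarray, dicom_path: str) -> float:
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    row_mm, col_mm = (float(v) for v in ds.PixelSpacing)    # mm per pixel (rows, columns)
    pixel_area_cm2 = (row_mm / 10.0) * (col_mm / 10.0)      # one pixel's area in cm^2
    return float(mask.sum()) * pixel_area_cm2

# Usage would pair each tissue mask (SM, VAT, SAT) with its source slice, e.g.
# cross_sectional_area_cm2(sm_mask, "l3_slice.dcm").
```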

Differentiating Benign and Hepatocellular Carcinoma Cirrhotic Nodules: Radiomics Analysis of Water Restriction Patterns with Diffusion MRI.

Arian A, Fotouhi M, Samadi Khoshe Mehr F, Setayeshpour B, Delazar S, Nahvijou A, Nasiri-Toosi M

PubMed | May 26, 2025
The current study aimed to investigate radiomics features derived from two-center diffusion MRI to differentiate benign and hepatocellular carcinoma (HCC) liver nodules. A total of 328 patients with 517 LI-RADS 2-5 nodules were included. MR images were retrospectively collected from 3 T and 1.5 T MRI scanners. Lesions were categorized into 242 benign and 275 HCC nodules based on follow-up imaging for LR-2,3 and pathology results for LR-4,5 nodules, and randomly divided into training (80%) and test (20%) sets. Preprocessing included resampling and normalization. Radiomics features were extracted from the lesion volume of interest (VOI) on diffusion images. Scanner variability was corrected using the ComBat harmonization method, followed by a high-correlation filter, a PCA filter, and LASSO to select important features. The best classifier model was selected by 10-fold cross-validation, and accuracy was assessed on the test dataset. In total, 1,434 features were extracted, and subsequent classifiers were constructed based on the 16 most important selected features. Notably, the support-vector machine (SVM) demonstrated the best performance on the test dataset in distinguishing between benign and HCC nodules, achieving an accuracy of 0.92, sensitivity of 0.94, and specificity of 0.86. Utilizing diffusion-MRI radiomics, our study highlights the performance of an SVM, trained on lesions' diffusivity characteristics, in distinguishing benign and HCC nodules, underscoring its clinical potential. Further evaluations on multi-center datasets are suggested to address harmonization challenges. Integration of diffusion radiomics, which monitors water restriction patterns as a histopathological index of the tumor, with machine learning models demonstrates potential as a reliable noninvasive method to improve current diagnostic criteria.
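
The feature-selection and classification chain described here maps naturally onto a scikit-learn pipeline. The sketch below is an approximation on synthetic data: ComBat harmonization and the high-correlation filter are omitted for brevity, and every hyperparameter is a placeholder rather than the authors' setting.

```python
# Hedged sketch of PCA filtering, LASSO-based selection of 16 features, and an RBF-SVM
# evaluated with 10-fold cross-validation; synthetic data, illustrative settings only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(517, 1434))            # 517 nodules x 1,434 radiomics features (as reported)
y = rng.integers(0, 2, size=517)            # 0 = benign, 1 = HCC (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),                                  # variance-preserving PCA filter
    SelectFromModel(Lasso(alpha=0.01), max_features=16,      # keep the 16 strongest features
                    threshold=-np.inf),
    SVC(kernel="rbf", probability=True),
)
print("10-fold CV accuracy:", cross_val_score(model, X_train, y_train, cv=10).mean())
model.fit(X_train, y_train)
print("Held-out test accuracy:", model.score(X_test, y_test))
```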

The extent of skeletal muscle wasting in prolonged critical illness and its association with survival: insights from a retrospective single-center study.

Kolck J, Hosse C, Fehrenbach U, Beetz NL, Auer TA, Pille C, Geisel D

PubMed | May 26, 2025
Muscle wasting in critically ill patients, particularly those with prolonged hospitalization, poses a significant challenge to recovery and long-term outcomes. The aim of this study was to characterize long-term muscle wasting trajectories in ICU patients with acute respiratory distress syndrome (ARDS) due to COVID-19 and acute pancreatitis (AP), to evaluate correlations between muscle wasting and patient outcomes, and to identify clinically feasible thresholds that have the potential to enhance patient care strategies. A cohort of 154 ICU patients (100 AP and 54 COVID-19 ARDS) with a minimum ICU stay of 10 days and at least three abdominal CT scans was retrospectively analyzed. AI-driven segmentation of CT scans quantified changes in psoas muscle area (PMA). A mixed model analysis was used to assess the correlation between mortality and muscle wasting, and Cox regression was applied to identify potential predictors of survival. Muscle loss rates, survival thresholds, and outcome correlations were assessed using Kaplan-Meier and receiver operating characteristic (ROC) analyses. Muscle loss in ICU patients was most pronounced in the first two weeks, peaking at -2.42% and -2.39% PMA loss per day in weeks 1 and 2, respectively, followed by a progressive decline. The median total PMA loss was 48.3%, with significantly greater losses in non-survivors. Mixed model analysis confirmed the correlation of muscle wasting with mortality. Cox regression identified visceral adipose tissue (VAT), sequential organ failure assessment (SOFA) score, and muscle wasting as significant risk factors, while increased skeletal muscle area (SMA) was protective. ROC and Kaplan-Meier analyses showed strong correlations between PMA loss thresholds and survival, with a daily loss > 4% predicting the worst survival (39.7%). To our knowledge, this is the first study to highlight the substantial progression of muscle wasting in ICU patients with prolonged hospitalization. The mortality-related thresholds for muscle wasting rates identified in this study may provide a basis for clinical risk stratification. Future research should validate these findings in larger cohorts and explore strategies to mitigate muscle loss. Trial registration: not applicable.
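
The survival modelling described here (Cox regression on wasting rate, VAT, SOFA, and SMA, plus Kaplan-Meier curves stratified by a daily PMA-loss threshold) can be sketched with the lifelines package. The column names and synthetic data below are assumptions for illustration only, not the study dataset.

```python
# Hedged sketch of a Cox model and threshold-stratified Kaplan-Meier fits; synthetic cohort.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

rng = np.random.default_rng(1)
n = 154
df = pd.DataFrame({
    "icu_days": rng.integers(10, 90, n),           # time to death or discharge
    "died": rng.integers(0, 2, n),                 # event indicator
    "pma_loss_pct_per_day": rng.uniform(0, 6, n),  # psoas muscle area loss rate
    "vat_cm2": rng.normal(150, 40, n),
    "sofa": rng.integers(2, 15, n),
    "sma_cm2": rng.normal(130, 25, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="icu_days", event_col="died")   # all remaining columns as covariates
cph.print_summary()

# Kaplan-Meier fits stratified by the >4 %/day PMA-loss threshold reported above.
fast = df["pma_loss_pct_per_day"] > 4
for label, grp in (("fast wasting", df[fast]), ("slow wasting", df[~fast])):
    KaplanMeierFitter().fit(grp["icu_days"], grp["died"], label=label)
```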

Machine-learning model based on computed tomography body composition analysis for the estimation of resting energy expenditure: A pilot study.

Palmas F, Ciudin A, Melian J, Guerra R, Zabalegui A, Cárdenas G, Mucarzel F, Rodriguez A, Roson N, Burgos R, Hernández C, Simó R

PubMed | May 26, 2025
The assessment of resting energy expenditure (REE) is a challenging task with currently available methods. The reference method, indirect calorimetry (IC), is not widely available, and other surrogates, such as predictive equations and bioimpedance analysis (BIA), show poor agreement with IC. Body composition (BC), in particular muscle mass, plays an important role in REE. In recent years, computed tomography (CT) has emerged as a reliable tool for BC assessment, but its usefulness for REE evaluation has not been examined. In the present study we explored the usefulness of CT-scan imaging for assessing REE using machine-learning models. This was a single-centre, observational, cross-sectional pilot study conducted from January to June 2022, including 90 fasting, clinically stable adults (≥18 years) with no contraindications for IC, BIA, or abdominal CT. REE was measured using classical predictive equations, IC, BIA, and skeletal CT scan. The proposed model was based on a second-order linear regression with different input parameters, and its output corresponds to the estimated REE. The model was trained and tested using a one-vs-all cross-validation strategy including subjects with different characteristics. Data from 90 subjects were included in the final analysis. Bland-Altman plots showed that the CT-based estimation model had a mean bias of 0 kcal/day (LoA: -508.4 to 508.4) compared with IC, indicating better agreement than most predictive equations and similar agreement to BIA (bias 53.4 kcal/day, LoA: -475.7 to 582.4). Surprisingly, gender and BMI, two of the main variables included in all BIA algorithms and predictive equations, were not relevant variables for REE calculated by means of AI coupled with the skeletal CT scan. These findings were consistent with other performance metrics, including mean absolute error (MAE), root mean square error (RMSE), and Lin's concordance correlation coefficient (CCC), which also favored the CT-based method over conventional equations. Our results suggest that the analysis of a CT-scan image by means of a machine-learning model is a reliable tool for REE estimation. These findings have the potential to significantly change the paradigm and guidelines for nutritional assessment.
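
The modelling strategy, a second-order linear regression on CT body-composition features with a Bland-Altman comparison against IC, can be sketched as follows. The "one-vs-all" cross-validation is read here as a leave-one-subject-out scheme, which is an assumption; the feature set and synthetic values are likewise ours, not the study data.

```python
# Hedged sketch: quadratic (second-order) regression on synthetic body-composition features,
# evaluated with leave-one-out predictions and a Bland-Altman style bias/LoA summary.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)
n = 90
X = np.column_stack([
    rng.normal(140, 30, n),   # skeletal muscle area (cm^2), synthetic
    rng.normal(150, 50, n),   # visceral adipose tissue area (cm^2), synthetic
    rng.normal(200, 60, n),   # subcutaneous adipose tissue area (cm^2), synthetic
])
ree_ic = 370 + 9.5 * X[:, 0] + rng.normal(0, 120, n)     # synthetic IC reference values

model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
ree_pred = cross_val_predict(model, X, ree_ic, cv=LeaveOneOut())

bias = float(np.mean(ree_pred - ree_ic))                  # Bland-Altman mean bias
loa = 1.96 * float(np.std(ree_pred - ree_ic, ddof=1))     # limits-of-agreement half-width
print(f"bias {bias:.1f} kcal/day, LoA ±{loa:.1f} kcal/day")
```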

Applications of artificial intelligence in abdominal imaging.

Gupta A, Rajamohan N, Bansal B, Chaudhri S, Chandarana H, Bagga B

PubMed | May 26, 2025
The rapid advancements in artificial intelligence (AI) carry the promise to reshape abdominal imaging by offering transformative solutions to challenges in disease detection, classification, and personalized care. AI applications, particularly those leveraging deep learning and radiomics, have demonstrated remarkable accuracy in detecting a wide range of abdominal conditions, including but not limited to diffuse liver parenchymal disease, focal liver lesions, pancreatic ductal adenocarcinoma (PDAC), renal tumors, and bowel pathologies. These models excel in the automation of tasks such as segmentation, classification, and prognostication across modalities like ultrasound, CT, and MRI, often surpassing traditional diagnostic methods. Despite these advancements, widespread adoption remains limited by challenges such as data heterogeneity, lack of multicenter validation, reliance on retrospective single-center studies, and the "black box" nature of many AI models, which hinder interpretability and clinician trust. The absence of standardized imaging protocols and reference gold standards further complicates integration into clinical workflows. To address these barriers, future directions emphasize collaborative multi-center efforts to generate diverse, standardized datasets, integration of explainable AI frameworks into existing picture archiving and communication systems, and the development of automated, end-to-end pipelines capable of processing multi-source data. Targeted clinical applications, such as early detection of PDAC, improved segmentation of renal tumors, and improved risk stratification in liver diseases, show potential to refine diagnostic accuracy and therapeutic planning. Ethical considerations, such as data privacy, regulatory compliance, and interdisciplinary collaboration, are essential for successful translation into clinical practice. AI's transformative potential in abdominal imaging lies not only in complementing radiologists but also in fostering precision medicine by enabling faster, more accurate, and patient-centered care. Overcoming current limitations through innovation and collaboration will be pivotal in realizing AI's full potential to improve patient outcomes and redefine the landscape of abdominal radiology.

Research-based clinical deployment of artificial intelligence algorithm for prostate MRI.

Harmon SA, Tetreault J, Esengur OT, Qin M, Yilmaz EC, Chang V, Yang D, Xu Z, Cohen G, Plum J, Sherif T, Levin R, Schmidt-Richberg A, Thompson S, Coons S, Chen T, Choyke PL, Xu D, Gurram S, Wood BJ, Pinto PA, Turkbey B

PubMed | May 26, 2025
A critical limitation to deployment and utilization of Artificial Intelligence (AI) algorithms in radiology practice is the actual integration of algorithms directly into the clinical Picture Archiving and Communications Systems (PACS). Here, we sought to integrate an AI-based pipeline for prostate organ and intraprostatic lesion segmentation within a clinical PACS environment to enable point-of-care utilization under a prospective clinical trial scenario. A previously trained, publicly available AI model for segmentation of intra-prostatic findings on multiparametric Magnetic Resonance Imaging (mpMRI) was converted into a containerized environment compatible with MONAI Deploy Express. An inference server and dedicated clinical PACS workflow were established within our institution for evaluation of real-time use of the AI algorithm. PACS-based deployment was prospectively evaluated in two phases: first, a consecutive cohort of patients undergoing diagnostic imaging at our institution, and second, a consecutive cohort of patients undergoing biopsy based on mpMRI findings. The AI pipeline was executed from within the PACS environment by the radiologist. AI findings were imported into clinical biopsy planning software for target definition. Metrics analyzing deployment success, timing, and detection performance were recorded and summarized. In phase one, clinical PACS deployment was successfully executed in 57/58 cases, and results were obtained within one minute of activation (median 33 s [range 21-50 s]). Comparison with expert radiologist annotation demonstrated stable model performance compared to independent validation studies. In phase two, 40/40 cases were successfully executed via PACS deployment and results were imported for biopsy targeting. Prostate cancer detection rates were 82.1% for ROI targets detected by both AI and the radiologist, 47.8% for targets proposed by AI and accepted by the radiologist, and 33.3% for targets identified by the radiologist alone. Integration of novel AI algorithms requiring multi-parametric input into a clinical PACS environment is feasible, and model outputs can be used for downstream clinical tasks.
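
For readers outside this setup, the general deployment pattern, a containerized inference service that the PACS-side workflow calls and whose output is imported into biopsy-planning software, can be sketched in a few lines. The snippet below is not the authors' MONAI Deploy Express pipeline: the endpoint, payload format, model file, and label convention are all assumptions made purely for illustration.

```python
# Hedged sketch of a minimal containerizable segmentation endpoint (FastAPI + TorchScript).
# Assumptions: "prostate_seg.ts" is an exported model; the client posts a numpy-saved
# float32 volume of shape (C, D, H, W); label 2 is treated as "lesion" for the summary.
import io
import numpy as np
import torch
from fastapi import FastAPI, File, UploadFile

app = FastAPI()
model = torch.jit.load("prostate_seg.ts").eval()   # assumed TorchScript export of the model

@app.post("/segment")
async def segment(volume: UploadFile = File(...)):
    arr = np.load(io.BytesIO(await volume.read()))           # (C, D, H, W) float32 volume
    with torch.no_grad():
        logits = model(torch.from_numpy(arr).unsqueeze(0))   # add batch dimension
    mask = logits.argmax(dim=1).squeeze(0).numpy().astype(np.uint8)
    return {"shape": list(mask.shape), "lesion_voxels": int((mask == 2).sum())}
```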

Fetal origins of adult disease: transforming prenatal care by integrating Barker's Hypothesis with AI-driven 4D ultrasound.

Andonotopo W, Bachnas MA, Akbar MIA, Aziz MA, Dewantiningrum J, Pramono MBA, Sulistyowati S, Stanojevic M, Kurjak A

PubMed | May 26, 2025
The fetal origins of adult disease, widely known as Barker's Hypothesis, suggest that adverse fetal environments significantly impact the risk of developing chronic diseases, such as diabetes and cardiovascular conditions, in adulthood. Recent advancements in 4D ultrasound (4D US) and artificial intelligence (AI) technologies offer a promising avenue for improving prenatal diagnostics and validating this hypothesis. These innovations provide detailed insights into fetal behavior and neurodevelopment, linking early developmental markers to long-term health outcomes. This study synthesizes contemporary developments in AI-enhanced 4D US, focusing on their roles in detecting fetal anomalies, assessing neurodevelopmental markers, and evaluating congenital heart defects. The integration of AI with 4D US allows for real-time, high-resolution visualization of fetal anatomy and behavior, surpassing the diagnostic precision of traditional methods. Despite these advancements, challenges such as algorithmic bias, data diversity, and real-world validation persist and require further exploration. Findings demonstrate that AI-driven 4D US improves diagnostic sensitivity and accuracy, enabling earlier detection of fetal abnormalities and optimization of clinical workflows. By providing a more comprehensive understanding of fetal programming, these technologies substantiate the links between early-life conditions and adult health outcomes, as proposed by Barker's Hypothesis. The integration of AI and 4D US has the potential to revolutionize prenatal care, paving the way for personalized maternal-fetal healthcare. Future research should focus on addressing current limitations, including ethical concerns and accessibility challenges, to promote equitable implementation. Such advancements could significantly reduce the global burden of chronic diseases and foster healthier generations.

Deep learning model for malignancy prediction of TI-RADS 4 thyroid nodules with high-risk characteristics using multimodal ultrasound: A multicentre study.

Chu X, Wang T, Chen M, Li J, Wang L, Wang C, Wang H, Wong ST, Chen Y, Li H

PubMed | May 26, 2025
The automatic screening of thyroid nodules using computer-aided diagnosis holds great promise for reducing missed and misdiagnosed cases in clinical practice. However, most current research focuses on single-modality images and does not fully leverage the comprehensive information available from multimodal medical images, limiting model performance. To enhance screening accuracy, this study uses a deep learning framework that integrates high-dimensional convolutions of B-mode ultrasound (BMUS) and strain elastography (SE) images to predict the malignancy of TI-RADS 4 thyroid nodules with high-risk features. First, we extract nodule regions from the images and expand the boundary areas. Then, adaptive particle swarm optimization (APSO) and contrast limited adaptive histogram equalization (CLAHE) algorithms are applied to enhance ultrasound image contrast. Finally, deep learning techniques are used to extract and fuse high-dimensional features from both ultrasound modalities to classify benign and malignant thyroid nodules. The proposed model achieved an AUC of 0.937 (95% CI 0.917-0.949) and 0.927 (95% CI 0.907-0.948) in the test and external validation sets, respectively, demonstrating strong generalization ability. When compared with the diagnostic performance of three groups of radiologists, the model outperformed them significantly. Meanwhile, with the model's assistance, all three radiologist groups showed improved diagnostic performance. Furthermore, heatmaps generated by the model show high alignment with radiologists' expertise, further confirming its credibility. The results indicate that our model can assist in clinical thyroid nodule diagnosis, reducing the risk of missed diagnoses and misdiagnoses, particularly in high-risk populations, and holds significant clinical value.
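
Of the preprocessing steps described, CLAHE contrast enhancement on a boundary-expanded nodule crop is easy to illustrate with OpenCV. The APSO-tuned parameters are not public, so the fixed clipLimit and tileGridSize values below are placeholders, and the crop logic is an assumption rather than the authors' implementation.

```python
# Hedged sketch: crop the nodule ROI with an expanded boundary, then apply CLAHE.
import cv2
import numpy as np

def preprocess_nodule(bmus: np.ndarray, box: tuple[int, int, int, int], margin: int = 20) -> np.ndarray:
    x, y, w, h = box
    x0, y0 = max(x - margin, 0), max(y - margin, 0)
    roi = bmus[y0:y + h + margin, x0:x + w + margin]          # boundary-expanded crop
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # placeholder parameters
    return clahe.apply(roi)

image = (np.random.rand(512, 512) * 255).astype(np.uint8)     # stand-in B-mode frame
enhanced = preprocess_nodule(image, box=(200, 180, 80, 60))   # (x, y, width, height)
```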