Page 41 of 58578 results

A novel MRI-based deep learning imaging biomarker for comprehensive assessment of the lenticulostriate artery-neural complex.

Song Y, Jin Y, Wei J, Wang J, Zheng Z, Wang Y, Zeng R, Lu W, Huang B

pubmed logopapersMay 26 2025
To develop a deep learning network for extracting features from the blood-supplying regions of the lenticulostriate artery (LSA) and to establish these features as an imaging biomarker for the comprehensive assessment of the lenticulostriate artery-neural complex (LNC). Automatic segmentation of brain regions on T1-weighted images was performed, followed by the development of the ResNet18 framework to extract and visualize deep learning features from three regions of interest (ROIs). The root mean squared error (RMSE) was then used to assess the correlation between these features and fractional anisotropy (FA) values from diffusion tensor imaging (DTI) and cerebral blood flow (CBF) values from arterial spin labeling (ASL). The correlation of these features with LSA root numbers and three disease categories was further validated using fine-tuning classification (Task 1 and Task 2). Seventy-nine patients were enrolled and classified into three groups. No significant differences were found in the number of LSA roots between the right and left hemispheres, nor in the FA and CBF values of the ROIs. The RMSE loss, relative to the mean FA and CBF values across different ROI inputs, ranged from 0.154% to 0.213%. The model's accuracy in Task 1 and Task 2 fine-tuning classification reached 100%. Deep learning features extracted from the basal ganglia nuclei effectively reflect cerebrovascular and neurological functions and reveal the damage status of the LSA. This approach holds promise as a novel imaging biomarker for the comprehensive assessment of the LNC.
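The RMSE comparison between model outputs and the mean FA/CBF reference values can be sketched as follows (a minimal illustration on plain lists; the function name and interface are ours, not from the paper):

```python
import math

def rmse(predicted, reference):
    """Root mean squared error between two equal-length value sequences."""
    assert len(predicted) == len(reference) and predicted
    return math.sqrt(
        sum((p - r) ** 2 for p, r in zip(predicted, reference)) / len(predicted)
    )
```

Per the abstract, this loss was computed per ROI input against the mean FA (from DTI) and CBF (from ASL) values.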

Research-based clinical deployment of artificial intelligence algorithm for prostate MRI.

Harmon SA, Tetreault J, Esengur OT, Qin M, Yilmaz EC, Chang V, Yang D, Xu Z, Cohen G, Plum J, Sherif T, Levin R, Schmidt-Richberg A, Thompson S, Coons S, Chen T, Choyke PL, Xu D, Gurram S, Wood BJ, Pinto PA, Turkbey B

pubmed logopapersMay 26 2025
A critical limitation to deployment and utilization of Artificial Intelligence (AI) algorithms in radiology practice is the actual integration of algorithms directly into the clinical Picture Archiving and Communication System (PACS). Here, we sought to integrate an AI-based pipeline for prostate organ and intraprostatic lesion segmentation within a clinical PACS environment to enable point-of-care utilization under a prospective clinical trial scenario. A previously trained, publicly available AI model for segmentation of intra-prostatic findings on multiparametric Magnetic Resonance Imaging (mpMRI) was converted into a containerized environment compatible with MONAI Deploy Express. An inference server and dedicated clinical PACS workflow were established within our institution for evaluation of real-time use of the AI algorithm. PACS-based deployment was prospectively evaluated in two phases: first, a consecutive cohort of patients undergoing diagnostic imaging at our institution and second, a consecutive cohort of patients undergoing biopsy based on mpMRI findings. The AI pipeline was executed from within the PACS environment by the radiologist. AI findings were imported into clinical biopsy planning software for target definition. Metrics analyzing deployment success, timing, and detection performance were recorded and summarized. In phase 1, clinical PACS deployment was successfully executed in 57/58 cases, with results obtained within one minute of activation (median 33 s [range 21-50 s]). Comparison with expert radiologist annotation demonstrated stable model performance compared to independent validation studies. In phase 2, 40/40 cases were successfully executed via PACS deployment and results were imported for biopsy targeting. Prostate cancer detection rates were 82.1% for ROI targets detected by both AI and the radiologist, 47.8% for targets proposed by AI and accepted by the radiologist, and 33.3% for targets identified by the radiologist alone.
Integration of novel AI algorithms requiring multi-parametric input into clinical PACS environment is feasible and model outputs can be used for downstream clinical tasks.
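The per-category detection rates reported above can be computed with a small tally like the one below (an illustrative sketch; the category labels and data layout are ours, not from the study):

```python
from collections import Counter

def detection_rates(targets):
    """Cancer detection rate per target category.

    targets: iterable of (category, cancer_positive) pairs, where category
    records who proposed the biopsy target (e.g. AI, radiologist, or both).
    """
    positive, total = Counter(), Counter()
    for category, hit in targets:
        total[category] += 1
        positive[category] += int(hit)
    return {c: positive[c] / total[c] for c in total}
```

Feeding the biopsy outcomes per target into such a tally yields the stratified rates (e.g. AI+radiologist vs. AI-only vs. radiologist-only) quoted in the abstract.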

Rate and Patient Specific Risk Factors for Periprosthetic Acetabular Fractures during Primary Total Hip Arthroplasty using a Pressfit Cup.

Simon S, Gobi H, Mitterer JA, Frank BJ, Huber S, Aichmair A, Dominkus M, Hofstaetter JG

pubmed logopapersMay 26 2025
Periprosthetic acetabular fractures following primary total hip arthroplasty (THA) using a cementless acetabular component range from occult to severe fractures. The aims of this study were to evaluate the perioperative periprosthetic acetabular fracture rate and patient-specific risks of a modular cementless acetabular component. In this study, we included 7,016 primary THAs (61.4% women, 38.6% men; age, 67 years; interquartile range, 58 to 74) that received a cementless-hydroxyapatite-coated modular-titanium press-fit acetabular component from a single manufacturer between January 2013 and September 2022. All perioperative radiographs and computed tomography (CT) scans were analyzed for all causes. Patient-specific data and the revision rate were retrieved, and radiographic measurements were performed using artificial intelligence-based software. Following matching based on patients' demographics, a comparison was made between patients who had and did not have periacetabular fractures in order to identify patient-specific and radiographic risk factors for periacetabular fractures. The fracture rate was 0.8% (56 of 7,016). Overall, 33.9% (19 of 56) were small occult fractures solely visible on CT. Additionally, there were 21 of 56 (37.5%) with a stable small fracture. Both groups (40 of 56 (71.4%)) were treated nonoperatively. Revision THA was necessary in 16 of 56, resulting in an overall revision rate of 0.2% (16 of 7,016). Patient-specific risk factors were a small acetabular component size (≤ 50), a low body mass index (BMI) (< 24.5), a higher age (> 68 years), female sex, a low lateral center-edge angle (< 24°), a high extrusion index (> 20%), a high Sharp angle (> 38°), and a high Tönnis angle (> 10°). A wide range of periprosthetic acetabular fractures was observed following primary cementless THA. In total, 71.4% of acetabular fractures were small cracks that did not necessitate revision surgery.
Identifying patient-specific risk factors, such as advanced age, female sex, low BMI, and dysplastic hips, may help reduce future complications.
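The patient-specific thresholds reported in the abstract lend themselves to a simple screening checklist; the sketch below encodes them directly (the function and its interface are illustrative, not software from the study):

```python
def acetabular_fracture_risk_flags(age_years, bmi, cup_size, sex,
                                   lcea_deg, extrusion_index_pct,
                                   sharp_angle_deg, tonnis_angle_deg):
    """Return the risk factors (per the study's reported thresholds) that
    apply to one patient. Cutoffs are taken verbatim from the abstract."""
    checks = [
        (cup_size <= 50, "small acetabular component (<= 50)"),
        (bmi < 24.5, "low BMI (< 24.5)"),
        (age_years > 68, "age > 68 years"),
        (sex == "female", "female sex"),
        (lcea_deg < 24, "lateral center-edge angle < 24 deg"),
        (extrusion_index_pct > 20, "extrusion index > 20%"),
        (sharp_angle_deg > 38, "Sharp angle > 38 deg"),
        (tonnis_angle_deg > 10, "Tonnis angle > 10 deg"),
    ]
    return [label for hit, label in checks if hit]
```

Such a checklist would only flag candidates for closer review; it is not a validated risk score.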

Segmentation of the Left Ventricle and Its Pathologies for Acute Myocardial Infarction After Reperfusion in LGE-CMR Images.

Li S, Wu C, Feng C, Bian Z, Dai Y, Wu LM

pubmed logopapersMay 26 2025
Because they are associated with a higher incidence of left ventricular dysfunction and complications, segmentation of the left ventricle and its related pathological tissues (microvascular obstruction and myocardial infarction) from late gadolinium enhancement cardiac magnetic resonance images is crucially important. However, the lack of datasets, diverse lesion shapes and locations, extreme class imbalance, and severe overlap of intensity distributions are the main challenges. We first release a late gadolinium enhancement cardiac magnetic resonance benchmark dataset, LGE-LVP, containing 140 patients with left ventricular myocardial infarction and concomitant microvascular obstruction. Then, a progressive deep learning model, LVPSegNet, is proposed to segment the left ventricle and its pathologies via adaptive region of interest extraction, sample augmentation, curriculum learning, and multiple receptive field fusion to deal with these challenges. Comprehensive comparisons with state-of-the-art models on the internal and external datasets demonstrate that the proposed model performs best on both geometric and clinical metrics and most closely matches the clinician's performance. Overall, the released LGE-LVP dataset and the proposed LVPSegNet offer a practical solution for automated segmentation of the left ventricle and its pathologies by providing data support and facilitating effective segmentation. The dataset and source codes will be released via https://github.com/DFLAG-NEU/LVPSegNet.
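The geometric metric most commonly used to score such segmentations under extreme class imbalance is the Dice similarity coefficient; a minimal per-class version on flattened label maps is sketched below (a generic illustration, not code from LVPSegNet):

```python
def dice_coefficient(pred, truth, label):
    """Dice similarity coefficient for one class in flattened label maps.

    Returns 1.0 when the class is absent from both maps (a common convention
    for empty-class slices)."""
    intersection = sum(1 for p, t in zip(pred, truth) if p == label and t == label)
    size_sum = (sum(1 for p in pred if p == label)
                + sum(1 for t in truth if t == label))
    return 2.0 * intersection / size_sum if size_sum else 1.0
```

For rare classes such as microvascular obstruction, Dice penalizes missed small regions much more sharply than voxel accuracy does, which is why it dominates in this literature.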

Impact of contrast-enhanced agent on segmentation using a deep learning-based software "Ai-Seg" for head and neck cancer.

Kihara S, Ueda Y, Harada S, Masaoka A, Kanayama N, Ikawa T, Inui S, Akagi T, Nishio T, Konishi K

pubmed logopapersMay 26 2025
In radiotherapy, auto-segmentation tools using deep learning assist in contouring organs-at-risk (OARs). We developed a segmentation model for head and neck (HN) OARs dedicated to contrast-enhanced (CE) computed tomography (CT) using the segmentation software Ai-Seg, and compared performance between CE and non-CE (nCE) CT. This retrospective study included 321 patients with HN cancers; a segmentation model (the CE model) was trained using CE CT. The CE model was installed in Ai-Seg and applied to an additional 25 patients with CE and nCE CT. The Dice similarity coefficient (DSC) and average Hausdorff distance (AHD) were calculated between the ground truth and Ai-Seg contours for the brain, brainstem, chiasm, optic nerves, cochleae, oral cavity, parotid glands, pharyngeal constrictor muscle, and submandibular glands (SMGs). For six OARs, we compared the CE model with the existing model in Ai-Seg, which was trained with nCE CT. The CE model obtained significantly higher DSCs on CE CT for the parotid glands and SMGs compared to the existing model. The CE model provided significantly lower DSC values and higher AHD values on nCE CT for the SMGs than on CE CT, but comparable values for the other OARs. The CE model achieved significantly better performance than the existing model and can be used on nCE CT images without a significant performance difference, except for the SMGs. Our results may facilitate the adoption of segmentation tools in clinical practice. We developed a segmentation model for HN OARs dedicated to CE CT using Ai-Seg and evaluated its usability on nCE CT.
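The AHD used alongside DSC here averages, in both directions, each contour point's distance to the nearest point on the other contour; a minimal 2D version is sketched below (a generic illustration on point lists, not the software's implementation):

```python
import math

def average_hausdorff(points_a, points_b):
    """Symmetric average Hausdorff distance between two non-empty point sets.

    Each point is an (x, y) tuple; extension to 3D only changes the tuples."""
    def mean_min(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return (mean_min(points_a, points_b) + mean_min(points_b, points_a)) / 2.0
```

Unlike the maximum Hausdorff distance, this average form is less dominated by a single outlier point, which is why it is often preferred for contour comparison.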

tUbe net: a generalisable deep learning tool for 3D vessel segmentation

Holroyd, N. A., Li, Z., Walsh, C., Brown, E. E., Shipley, R. J., Walker-Samuel, S.

biorxiv logopreprintMay 26 2025
Deep learning has become an invaluable tool for bioimage analysis but, while open-source cell annotation software such as Cellpose is widely used, an equivalent tool for three-dimensional (3D) vascular annotation does not exist. With the vascular system being directly impacted by a broad range of diseases, there is significant medical interest in quantitative analysis for vascular imaging. However, existing deep learning approaches for this task are specialised to particular tissue types or imaging modalities. We present a new deep learning model for segmentation of vasculature that is generalisable across tissues, modalities, scales and pathologies. To create a generalisable model, a 3D convolutional neural network was trained using data from multiple modalities including optical imaging, computed tomography and photoacoustic imaging. Through this varied training set, the model was forced to learn features common to vessels across modalities and scales. Following this, the general model was fine-tuned to different applications with a minimal amount of manually labelled ground truth data. It was found that the general model could be specialised to segment new datasets, with a high degree of accuracy, using as little as 0.3% of the volume of that dataset for fine-tuning. As such, this model enables users to produce accurate segmentations of 3D vascular networks without the need to label large amounts of training data.

Evolution of deep learning tooth segmentation from CT/CBCT images: a systematic review and meta-analysis.

Kot WY, Au Yeung SY, Leung YY, Leung PH, Yang WF

pubmed logopapersMay 26 2025
Deep learning has been utilized to segment teeth from computed tomography (CT) or cone-beam CT (CBCT). However, the performance of deep learning is unknown due to multiple models and diverse evaluation metrics. This systematic review and meta-analysis aims to evaluate the evolution and performance of deep learning in tooth segmentation. We systematically searched PubMed, Web of Science, Scopus, IEEE Xplore, arXiv.org, and ACM for studies investigating deep learning in human tooth segmentation from CT/CBCT. Included studies were assessed using the Quality Assessment of Diagnostic Accuracy Study (QUADAS-2) tool. Data were extracted for meta-analyses by random-effects models. A total of 30 studies were included in the systematic review, and 28 of them were included for meta-analyses. Various deep learning algorithms were categorized according to the backbone network, encompassing single-stage convolutional models, convolutional models with U-Net architecture, Transformer models, convolutional models with attention mechanisms, and combinations of multiple models. Convolutional models with U-Net architecture were the most commonly used deep learning algorithms. The integration of attention mechanisms within convolutional models has become a new topic. Twenty-nine evaluation metrics were identified, with the Dice Similarity Coefficient (DSC) being the most popular. The pooled results were 0.93 [0.93, 0.93] for DSC, 0.86 [0.85, 0.87] for Intersection over Union (IoU), 0.22 [0.19, 0.24] for Average Symmetric Surface Distance (ASSD), 0.92 [0.90, 0.94] for sensitivity, 0.71 [0.26, 1.17] for 95% Hausdorff distance, and 0.96 [0.93, 0.98] for precision. No significant difference was observed in the segmentation of single-rooted or multi-rooted teeth. No obvious correlation between sample size and segmentation performance was observed.
Multiple deep learning algorithms have been successfully applied to tooth segmentation from CT/CBCT, and their evolution has been summarized and categorized according to their backbone structures. In the future, studies with standardized protocols and open labelled datasets are needed.
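Random-effects pooling of per-study metrics, as used for the intervals above, is commonly done with the DerSimonian-Laird estimator; a minimal sketch is shown below (a generic illustration assuming each study reports an estimate and its variance; not the review's own analysis code):

```python
import math

def dl_pool(estimates, variances):
    """DerSimonian-Laird random-effects pooled estimate with a 95% CI.

    estimates: per-study metric values (e.g. DSC); variances: their variances."""
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, estimates))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(estimates) - 1)) / c) if c > 0 else 0.0
    w_re = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, estimates)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

The between-study variance tau-squared widens the interval when studies disagree, which is why pooled DSC intervals can stay tight while the 95% Hausdorff distance interval [0.26, 1.17] is wide.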

Applications of artificial intelligence in abdominal imaging.

Gupta A, Rajamohan N, Bansal B, Chaudhri S, Chandarana H, Bagga B

pubmed logopapersMay 26 2025
The rapid advancements in artificial intelligence (AI) carry the promise to reshape abdominal imaging by offering transformative solutions to challenges in disease detection, classification, and personalized care. AI applications, particularly those leveraging deep learning and radiomics, have demonstrated remarkable accuracy in detecting a wide range of abdominal conditions, including but not limited to diffuse liver parenchymal disease, focal liver lesions, pancreatic ductal adenocarcinoma (PDAC), renal tumors, and bowel pathologies. These models excel in the automation of tasks such as segmentation, classification, and prognostication across modalities like ultrasound, CT, and MRI, often surpassing traditional diagnostic methods. Despite these advancements, widespread adoption remains limited by challenges such as data heterogeneity, lack of multicenter validation, reliance on retrospective single-center studies, and the "black box" nature of many AI models, which hinder interpretability and clinician trust. The absence of standardized imaging protocols and reference gold standards further complicates integration into clinical workflows. To address these barriers, future directions emphasize collaborative multi-center efforts to generate diverse, standardized datasets, integration of explainable AI frameworks into existing picture archiving and communication systems, and the development of automated, end-to-end pipelines capable of processing multi-source data. Targeted clinical applications, such as early detection of PDAC, improved segmentation of renal tumors, and enhanced risk stratification in liver diseases, show potential to refine diagnostic accuracy and therapeutic planning. Ethical considerations, such as data privacy, regulatory compliance, and interdisciplinary collaboration, are essential for successful translation into clinical practice.
AI's transformative potential in abdominal imaging lies not only in complementing radiologists but also in fostering precision medicine by enabling faster, more accurate, and patient-centered care. Overcoming current limitations through innovation and collaboration will be pivotal in realizing AI's full potential to improve patient outcomes and redefine the landscape of abdominal radiology.

The extent of Skeletal muscle wasting in prolonged critical illness and its association with survival: insights from a retrospective single-center study.

Kolck J, Hosse C, Fehrenbach U, Beetz NL, Auer TA, Pille C, Geisel D

pubmed logopapersMay 26 2025
Muscle wasting in critically ill patients, particularly those with prolonged hospitalization, poses a significant challenge to recovery and long-term outcomes. The aim of this study was to characterize long-term muscle wasting trajectories in ICU patients with acute respiratory distress syndrome (ARDS) due to COVID-19 and acute pancreatitis (AP), to evaluate correlations between muscle wasting and patient outcomes, and to identify clinically feasible thresholds that have the potential to enhance patient care strategies. A cohort of 154 ICU patients (100 AP and 54 COVID-19 ARDS) with a minimum ICU stay of 10 days and at least three abdominal CT scans were retrospectively analyzed. AI-driven segmentation of CT scans quantified changes in psoas muscle area (PMA). A mixed model analysis was used to assess the correlation between mortality and muscle wasting, and Cox regression was applied to identify potential predictors of survival. Muscle loss rates, survival thresholds, and outcome correlations were assessed using Kaplan-Meier and receiver operating characteristic (ROC) analyses. Muscle loss in ICU patients was most pronounced in the first two weeks, peaking at -2.42% and -2.39% PMA loss per day in weeks 1 and 2, respectively, followed by a progressive decline. The median total PMA loss was 48.3%, with significantly greater losses in non-survivors. Mixed model analysis confirmed the correlation of muscle wasting with mortality. Cox regression identified visceral adipose tissue (VAT), sequential organ failure assessment (SOFA) score, and muscle wasting as significant risk factors, while increased skeletal muscle area (SMA) was protective. ROC and Kaplan-Meier analyses showed strong correlations between PMA loss thresholds and survival, with daily loss > 4% predicting the worst survival (39.7%). To our knowledge, this is the first study to highlight the substantial progression of muscle wasting in ICU patients with prolonged hospitalization.
The mortality-related thresholds for muscle wasting rates identified in this study may provide a basis for clinical risk stratification. Future research should validate these findings in larger cohorts and explore strategies to mitigate muscle loss.
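Turning two serial PMA measurements into the daily-loss rate used for the study's > 4%/day threshold is straightforward; the sketch below shows the arithmetic (names and interface are illustrative, not the study's software):

```python
def daily_pma_loss_pct(baseline_area, followup_area, days_between):
    """Percent of baseline psoas muscle area (PMA) lost per day between two scans."""
    return (baseline_area - followup_area) / baseline_area * 100.0 / days_between

def worst_survival_group(daily_loss_pct, threshold_pct=4.0):
    """The abstract reports worst survival (39.7%) above 4% daily PMA loss."""
    return daily_loss_pct > threshold_pct
```

For example, a drop from 100 to 90 area units over five days corresponds to 2% loss per day, below the reported high-risk threshold.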

Rep3D: Re-parameterize Large 3D Kernels with Low-Rank Receptive Modeling for Medical Imaging

Ho Hin Lee, Quan Liu, Shunxing Bao, Yuankai Huo, Bennett A. Landman

arxiv logopreprintMay 26 2025
In contrast to vision transformers, which model long-range dependencies through global self-attention, large kernel convolutions provide a more efficient and scalable alternative, particularly in high-resolution 3D volumetric settings. However, naively increasing kernel size often leads to optimization instability and degradation in performance. Motivated by the spatial bias observed in effective receptive fields (ERFs), we hypothesize that different kernel elements converge at variable rates during training. To support this, we derive a theoretical connection between element-wise gradients and first-order optimization, showing that structurally re-parameterized convolution blocks inherently induce spatially varying learning rates. Building on this insight, we introduce Rep3D, a 3D convolutional framework that incorporates a learnable spatial prior into large kernel training. A lightweight two-stage modulation network generates a receptive-biased scaling mask, adaptively re-weighting kernel updates and enabling local-to-global convergence behavior. Rep3D adopts a plain encoder design with large depthwise convolutions, avoiding the architectural complexity of multi-branch compositions. We evaluate Rep3D on five challenging 3D segmentation benchmarks and demonstrate consistent improvements over state-of-the-art baselines, including transformer-based and fixed-prior re-parameterization methods. By unifying spatial inductive bias with optimization-aware learning, Rep3D offers an interpretable and scalable solution for 3D medical image analysis. The source code is publicly available at https://github.com/leeh43/Rep3D.
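The core idea of a scaling mask re-weighting kernel updates can be illustrated with a single masked gradient step on a flattened kernel (a toy sketch of the mechanism only; Rep3D itself learns the mask with a modulation network and operates on 3D depthwise kernels):

```python
def reweighted_update(kernel, grad, mask, lr=0.01):
    """One gradient step where each kernel element's update is scaled by a
    spatial mask, giving the element-wise (spatially varying) learning rates
    described in the abstract. All arguments are flat lists of floats."""
    return [k - lr * m * g for k, g, m in zip(kernel, grad, mask)]
```

With a mask that is large near the kernel center and small at the periphery, central elements adapt quickly while peripheral ones converge slowly, which is the local-to-global convergence behavior the paper targets.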
