
Multicentre evaluation of deep learning CT autosegmentation of the head and neck region for radiotherapy.

Pang EPP, Tan HQ, Wang F, Niemelä J, Bolard G, Ramadan S, Kiljunen T, Capala M, Petit S, Seppälä J, Vuolukka K, Kiitam I, Zolotuhhin D, Gershkevitsh E, Lehtiö K, Nikkinen J, Keyriläinen J, Mokka M, Chua MLK

PubMed · May 27, 2025
This multi-institutional study evaluated head-and-neck CT auto-segmentation software across seven institutions globally. Eleven lymph node levels and seven organ-at-risk contours were evaluated in a two-phase study design. Time savings were measured in both phases, and inter-observer variability across the seven institutions was quantified in phase two. Overall time savings were 42% in phase one and 49% in phase two. Lymph node levels IA, IB, III, IVA, and IVB showed no significant time savings, with some centers reporting longer editing times than manual delineation. All edited ROIs showed reduced inter-observer variability compared with manual segmentation. Our study shows that auto-segmentation can play a crucial role in harmonizing contouring practices globally. However, its clinical benefits vary significantly across ROIs and between clinics; to maximize them, institution-specific commissioning is required.
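
The inter-observer variability quantified in phase two is typically summarized as pairwise contour agreement; a minimal sketch under that assumption (NumPy binary masks; the abstract does not specify the study's actual metric):

```python
import itertools
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def interobserver_agreement(masks: list[np.ndarray]) -> float:
    """Mean pairwise Dice across all observers' contours of the same ROI;
    lower values indicate higher inter-observer variability."""
    pairs = itertools.combinations(masks, 2)
    return float(np.mean([dice(a, b) for a, b in pairs]))
```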

Development of an Open-Source Algorithm for Automated Segmentation in Clinician-Led Paranasal Sinus Radiologic Research.

Darbari Kaul R, Zhong W, Liu S, Azemi G, Liang K, Zou E, Sacks PL, Thiel C, Campbell RG, Kalish L, Sacks R, Di Ieva A, Harvey RJ

PubMed · May 27, 2025
Artificial intelligence (AI) research needs to be clinician led; however, the required computational expertise typically lies outside clinicians' skill set. Collaborations exist but are often commercially driven. Free, open-source computational algorithms and software expertise are required for meaningful, clinically driven AI medical research. Deep learning algorithms automate the segmentation of regions of interest for analysis and clinical translation. Numerous studies have automatically segmented paranasal sinus computed tomography (CT) scans; however, openly accessible algorithms capturing the sinonasal cavity remain scarce. The purpose of this study was to validate and provide an open-source segmentation algorithm for paranasal sinus CTs for the otolaryngology research community. A cross-sectional comparative study was conducted between a deep learning algorithm, UNet++, modified for automatic segmentation of paranasal sinus CTs, and "ground-truth" manual segmentations. A dataset of 100 paranasal sinus scans was manually segmented, with an 80/20 training/testing split. The algorithm is available at https://github.com/rheadkaul/SinusSegment. Primary outcomes included the Dice similarity coefficient (DSC), Intersection over Union (IoU), Hausdorff distance (HD), sensitivity, specificity, and visual similarity grading. Twenty scans representing 7300 slices were assessed. The mean DSC was 0.87 and IoU 0.80, with an HD of 33.61 mm. The mean sensitivity was 83.98% and specificity 99.81%. The median visual similarity grading score was 3 (good). There were no statistically significant differences in outcomes between normal and diseased paranasal sinus CTs. Automatic segmentation of paranasal sinus CTs yields good results compared with manual segmentation. This study provides an open-source segmentation algorithm as a foundation and gateway for more complex AI-based analysis of large datasets.
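
For reference, the overlap and distance metrics named above can be computed from binary masks along the following lines. This sketch assumes NumPy/SciPy and voxel spacing in mm; the authors' exact implementation is not given in the abstract:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over Union of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def hausdorff(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric Hausdorff distance in mm via Euclidean distance transforms."""
    a, b = a.astype(bool), b.astype(bool)
    dist_to_b = distance_transform_edt(~b, sampling=spacing)  # per-voxel distance to mask b
    dist_to_a = distance_transform_edt(~a, sampling=spacing)  # per-voxel distance to mask a
    return float(max(dist_to_b[a].max(), dist_to_a[b].max()))
```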

Detecting microcephaly and macrocephaly from ultrasound images using artificial intelligence.

Mengistu AK, Assaye BT, Flatie AB, Mossie Z

PubMed · May 26, 2025
Microcephaly and macrocephaly, abnormal congenital markers, are associated with developmental and neurologic deficits, so early ultrasound imaging is medically imperative. However, in resource-limited countries such as Ethiopia, limited access to trained personnel and diagnostic machines prevents accurate and consistent diagnosis. This study aims to develop a fetal head abnormality detection model from ultrasound images via deep learning. Data were collected from three Ethiopian healthcare facilities to increase model generalizability; the recruitment period ran from November 9, 2024, to November 30, 2024. Several preprocessing techniques were applied, including augmentation, noise reduction, and normalization. SegNet, UNet, FCN, MobileNetV2, and EfficientNet-B0 were applied to segment and measure fetal head structures from ultrasound images. The measurements were classified as microcephaly, macrocephaly, or normal using WHO guidelines for gestational age, and model performance was then compared with that of industry experts. The evaluation metrics were accuracy, precision, recall, F1 score, and the Dice coefficient. The study demonstrated the feasibility of using SegNet for automatic segmentation, measurement of fetal head abnormalities, and classification of macrocephaly and microcephaly, with an accuracy of 98% and a Dice coefficient of 0.97. Compared with industry experts, the model achieved accuracies of 92.5% and 91.2% for the biparietal diameter (BPD) and head circumference (HC) measurements, respectively. Deep learning models can enhance prenatal diagnosis workflows, especially in resource-constrained settings. Future work should optimize model performance, explore more complex models, and expand datasets to improve generalizability. If adopted, these technologies can be used in prenatal care delivery. Trial registration: not applicable.
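
The classification step reduces to comparing a measured head circumference with gestational-age norms. A sketch with hypothetical reference values (a real WHO or INTERGROWTH table, and the study's actual cut-offs, would be substituted):

```python
# Hypothetical reference: gestational week -> (mean HC in mm, SD in mm).
# These numbers are placeholders for the sketch, not clinical values.
REFERENCE = {20: (175.0, 8.0), 28: (262.0, 10.0), 36: (322.0, 11.0)}

def classify_head(hc_mm: float, ga_weeks: int, z_cut: float = 2.0) -> str:
    """Classify a head-circumference measurement by z-score against GA norms."""
    mean, sd = REFERENCE[ga_weeks]          # raises KeyError for weeks not tabulated
    z = (hc_mm - mean) / sd
    if z < -z_cut:
        return "microcephaly"
    if z > z_cut:
        return "macrocephaly"
    return "normal"
```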

A novel MRI-based deep learning imaging biomarker for comprehensive assessment of the lenticulostriate artery-neural complex.

Song Y, Jin Y, Wei J, Wang J, Zheng Z, Wang Y, Zeng R, Lu W, Huang B

PubMed · May 26, 2025
To develop a deep learning network for extracting features from the blood-supplying regions of the lenticulostriate artery (LSA) and to establish these features as an imaging biomarker for the comprehensive assessment of the lenticulostriate artery-neural complex (LNC). Automatic segmentation of brain regions on T1-weighted images was performed, followed by development of a ResNet18 framework to extract and visualize deep learning features from three regions of interest (ROIs). The root mean squared error (RMSE) was then used to assess the correlation between these features and fractional anisotropy (FA) values from diffusion tensor imaging (DTI) and cerebral blood flow (CBF) values from arterial spin labeling (ASL). The correlation of these features with the number of LSA roots and with three disease categories was further validated using fine-tuning classification (Task 1 and Task 2). Seventy-nine patients were enrolled and classified into three groups. No significant differences were found in the number of LSA roots between the right and left hemispheres, nor in the FA and CBF values of the ROIs. The RMSE loss, relative to the mean FA and CBF values across different ROI inputs, ranged from 0.154% to 0.213%. The model's accuracy in Task 1 and Task 2 fine-tuning classification reached 100%. Deep learning features extracted from the basal ganglia nuclei effectively reflect cerebrovascular and neurological functions and reveal the damage status of the LSA. This approach holds promise as a novel imaging biomarker for the comprehensive assessment of the LNC.
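
One plausible reading of the feature-extraction step is a ResNet18 backbone with its classifier removed, yielding a 512-dimensional feature vector per ROI patch; the preprocessing and the RMSE comparison below are illustrative assumptions, not the paper's exact setup:

```python
import torch
import torchvision.models as models

backbone = models.resnet18(weights=None)   # the study trains its own weights; none loaded here
backbone.fc = torch.nn.Identity()          # drop the classifier, keep 512-d penultimate features
backbone.eval()

with torch.no_grad():
    roi_batch = torch.randn(4, 3, 224, 224)   # stand-in for preprocessed T1-weighted ROI patches
    features = backbone(roi_batch)            # shape (4, 512)

# RMSE against a reference, as used to relate deep features to FA/CBF values:
pred, target = torch.randn(4), torch.randn(4)  # placeholder predictions and measurements
rmse = torch.sqrt(torch.mean((pred - target) ** 2))
```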

Rate and Patient Specific Risk Factors for Periprosthetic Acetabular Fractures during Primary Total Hip Arthroplasty using a Pressfit Cup.

Simon S, Gobi H, Mitterer JA, Frank BJ, Huber S, Aichmair A, Dominkus M, Hofstaetter JG

PubMed · May 26, 2025
Periprosthetic acetabular fractures following primary total hip arthroplasty (THA) using a cementless acetabular component range from occult to severe fractures. The aims of this study were to evaluate the perioperative periprosthetic acetabular fracture rate and the patient-specific risks of a modular cementless acetabular component. We included 7,016 primary THAs (61.4% women, 38.6% men; median age, 67 years; interquartile range, 58 to 74) that received a cementless, hydroxyapatite-coated, modular titanium press-fit acetabular component from a single manufacturer between January 2013 and September 2022. All perioperative radiographs and computed tomography (CT) scans were analyzed for all cases. Patient-specific data and the revision rate were retrieved, and radiographic measurements were performed using artificial intelligence-based software. After matching based on patient demographics, patients who had periacetabular fractures were compared with those who did not in order to identify patient-specific and radiographic risk factors. The fracture rate was 0.8% (56 of 7,016). Overall, 19 of 56 (33.9%) were small occult fractures visible only on CT, and a further 21 of 56 (37.5%) were small stable fractures; both groups (40 of 56; 71.4%) were treated nonoperatively. Revision THA was necessary in 16 of 56 cases, giving an overall revision rate of 0.2% (16 of 7,016). Patient-specific risk factors were a small acetabular component size (≤ 50), a low body mass index (BMI) (< 24.5), higher age (> 68 years), female sex, a low lateral center-edge angle (< 24°), a high extrusion index (> 20%), a high Sharp angle (> 38°), and a high Tönnis angle (> 10°). A wide range of periprosthetic acetabular fractures was observed following primary cementless THA. In total, 71.4% of acetabular fractures were small cracks that did not necessitate revision surgery. By identifying patient-specific risk factors, such as advanced age, female sex, low BMI, and dysplastic hips, future complications may be reduced.
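
For illustration, the reported thresholds can be encoded as simple flags. Note the study reports associations, not a validated scoring rule, so this function merely restates the abstract's cut-offs:

```python
def fracture_risk_flags(component_size, bmi, age, sex, lcea, extrusion_idx, sharp, tonnis):
    """Flag the patient-specific risk factors reported in the abstract.
    Purely illustrative: these are reported associations, not a clinical score."""
    return {
        "small_component": component_size <= 50,
        "low_bmi": bmi < 24.5,
        "higher_age": age > 68,
        "female_sex": sex == "F",
        "low_lateral_center_edge_angle": lcea < 24.0,   # degrees
        "high_extrusion_index": extrusion_idx > 0.20,   # fraction, i.e. > 20%
        "high_sharp_angle": sharp > 38.0,               # degrees
        "high_tonnis_angle": tonnis > 10.0,             # degrees
    }
```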

Research-based clinical deployment of artificial intelligence algorithm for prostate MRI.

Harmon SA, Tetreault J, Esengur OT, Qin M, Yilmaz EC, Chang V, Yang D, Xu Z, Cohen G, Plum J, Sherif T, Levin R, Schmidt-Richberg A, Thompson S, Coons S, Chen T, Choyke PL, Xu D, Gurram S, Wood BJ, Pinto PA, Turkbey B

PubMed · May 26, 2025
A critical limitation to the deployment and utilization of artificial intelligence (AI) algorithms in radiology practice is the integration of algorithms directly into clinical Picture Archiving and Communication Systems (PACS). Here, we sought to integrate an AI-based pipeline for prostate organ and intraprostatic lesion segmentation within a clinical PACS environment to enable point-of-care utilization under a prospective clinical trial scenario. A previously trained, publicly available AI model for segmentation of intraprostatic findings on multiparametric magnetic resonance imaging (mpMRI) was converted into a containerized environment compatible with MONAI Deploy Express. An inference server and a dedicated clinical PACS workflow were established within our institution to evaluate real-time use of the AI algorithm. PACS-based deployment was prospectively evaluated in two phases: first, in a consecutive cohort of patients undergoing diagnostic imaging at our institution, and second, in a consecutive cohort of patients undergoing biopsy based on mpMRI findings. The AI pipeline was executed from within the PACS environment by the radiologist, and AI findings were imported into clinical biopsy planning software for target definition. Metrics analyzing deployment success, timing, and detection performance were recorded and summarized. In phase one, clinical PACS deployment was successfully executed in 57 of 58 cases, with results obtained within one minute of activation (median, 33 s; range, 21 to 50 s). Comparison with expert radiologist annotation demonstrated stable model performance relative to independent validation studies. In phase two, 40 of 40 cases were successfully executed via PACS deployment, and results were imported for biopsy targeting. Prostate cancer detection rates were 82.1% for ROI targets detected by both the AI and the radiologist, 47.8% for targets proposed by the AI and accepted by the radiologist, and 33.3% for targets identified by the radiologist alone. Integration of novel AI algorithms requiring multiparametric input into a clinical PACS environment is feasible, and model outputs can be used for downstream clinical tasks.
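
Schematically, the point-of-care flow amounts to: pull the series, run inference, push results back to the PACS. In this sketch every helper (`study.load_mpmri_series`, `model.segment`, `export_seg`) is a hypothetical stand-in for MONAI Deploy and site-specific integration code, not a real API:

```python
import time

def run_pipeline(study, model, export_seg):
    """Hypothetical point-of-care flow triggered from the PACS by the radiologist.
    All three arguments stand in for site-specific integration components."""
    t0 = time.monotonic()
    volume = study.load_mpmri_series()   # multiparametric inputs required by the model
    masks = model.segment(volume)        # prostate organ + intraprostatic lesion masks
    export_seg(study, masks)             # write results back into the PACS for biopsy planning
    return time.monotonic() - t0         # deployment timing metric: seconds from activation
```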

Segmentation of the Left Ventricle and Its Pathologies for Acute Myocardial Infarction After Reperfusion in LGE-CMR Images.

Li S, Wu C, Feng C, Bian Z, Dai Y, Wu LM

PubMed · May 26, 2025
Because they are associated with a higher incidence of left ventricular dysfunction and complications, segmentation of the left ventricle and its related pathological tissues, microvascular obstruction and myocardial infarction, from late gadolinium enhancement cardiac magnetic resonance images is crucially important. However, the main challenges are the lack of datasets, the diverse shapes and locations of lesions, extreme class imbalance, and severe overlap between intensity distributions. We first release a late gadolinium enhancement cardiac magnetic resonance benchmark dataset, LGE-LVP, containing 140 patients with left ventricular myocardial infarction and concomitant microvascular obstruction. We then propose a progressive deep learning model, LVPSegNet, to segment the left ventricle and its pathologies, addressing these challenges via adaptive region-of-interest extraction, sample augmentation, curriculum learning, and multiple-receptive-field fusion. Comprehensive comparisons with state-of-the-art models on internal and external datasets demonstrate that the proposed model performs best on both geometric and clinical metrics and most closely matches the clinician's performance. Overall, the released LGE-LVP dataset and the proposed LVPSegNet offer a practical solution for automated segmentation of the left ventricle and its pathologies. The dataset and source code will be released via https://github.com/DFLAG-NEU/LVPSegNet.
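
Of the techniques listed, multiple-receptive-field fusion is the easiest to illustrate in isolation; a generic PyTorch sketch using parallel dilated convolutions (the actual LVPSegNet block may differ):

```python
import torch
import torch.nn as nn

class MultiRFFusion(nn.Module):
    """Parallel dilated 3x3 convolutions fused by a 1x1 convolution: one generic
    way to combine several receptive fields at the same feature resolution."""
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each branch sees a different receptive field; concatenate, then mix channels.
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))
```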

tUbe net: a generalisable deep learning tool for 3D vessel segmentation

Holroyd, N. A., Li, Z., Walsh, C., Brown, E. E., Shipley, R. J., Walker-Samuel, S.

bioRxiv preprint · May 26, 2025
Deep learning has become an invaluable tool for bioimage analysis but, while open-source cell annotation software such as cellpose is widely used, an equivalent tool for three-dimensional (3D) vascular annotation does not exist. With the vascular system directly impacted by a broad range of diseases, there is significant medical interest in quantitative analysis for vascular imaging. However, existing deep learning approaches for this task are specialised to particular tissue types or imaging modalities. We present a new deep learning model for segmentation of vasculature that is generalisable across tissues, modalities, scales, and pathologies. To create a generalisable model, a 3D convolutional neural network was trained using data from multiple modalities, including optical imaging, computed tomography, and photoacoustic imaging. Through this varied training set, the model was forced to learn features common to vessels across modalities and scales. Following this, the general model was fine-tuned to different applications with a minimal amount of manually labelled ground-truth data. The general model could be specialised to segment new datasets with a high degree of accuracy using as little as 0.3% of the volume of each dataset for fine-tuning. As such, this model enables users to produce accurate segmentations of 3D vascular networks without needing to label large amounts of training data.
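
A minimal sketch of the fine-tuning recipe described: freeze the generalist encoder and retrain only the output layers on the small labelled subset. `VesselNet` here is a toy stand-in, not tUbe net's actual architecture or checkpoint:

```python
import torch
import torch.nn as nn

class VesselNet(nn.Module):
    """Toy stand-in for a pretrained 3D vessel-segmentation network."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv3d(32, 2, kernel_size=1)  # background / vessel logits

    def forward(self, x):
        return self.head(self.encoder(x))

model = VesselNet()
# model.load_state_dict(torch.load("general_model.pt"))  # hypothetical generalist checkpoint

# Freeze the generalist encoder; only the head is updated on the new labelled subset.
for p in model.encoder.parameters():
    p.requires_grad = False
optimiser = torch.optim.Adam(model.head.parameters(), lr=1e-4)
```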

Applications of artificial intelligence in abdominal imaging.

Gupta A, Rajamohan N, Bansal B, Chaudhri S, Chandarana H, Bagga B

PubMed · May 26, 2025
The rapid advancements in artificial intelligence (AI) carry the promise to reshape abdominal imaging by offering transformative solutions to challenges in disease detection, classification, and personalized care. AI applications, particularly those leveraging deep learning and radiomics, have demonstrated remarkable accuracy in detecting a wide range of abdominal conditions, including but not limited to diffuse liver parenchymal disease, focal liver lesions, pancreatic ductal adenocarcinoma (PDAC), renal tumors, and bowel pathologies. These models excel in the automation of tasks such as segmentation, classification, and prognostication across modalities like ultrasound, CT, and MRI, often surpassing traditional diagnostic methods. Despite these advancements, widespread adoption remains limited by challenges such as data heterogeneity, lack of multicenter validation, reliance on retrospective single-center studies, and the "black box" nature of many AI models, which hinders interpretability and clinician trust. The absence of standardized imaging protocols and reference gold standards further complicates integration into clinical workflows. To address these barriers, future directions emphasize collaborative multicenter efforts to generate diverse, standardized datasets, integration of explainable AI frameworks into existing picture archiving and communication systems, and the development of automated, end-to-end pipelines capable of processing multi-source data. Targeted clinical applications, such as early detection of PDAC, improved segmentation of renal tumors, and better risk stratification in liver diseases, show potential to refine diagnostic accuracy and therapeutic planning. Ethical considerations, such as data privacy, regulatory compliance, and interdisciplinary collaboration, are essential for successful translation into clinical practice. AI's transformative potential in abdominal imaging lies not only in complementing radiologists but also in fostering precision medicine by enabling faster, more accurate, and patient-centered care. Overcoming current limitations through innovation and collaboration will be pivotal in realizing AI's full potential to improve patient outcomes and redefine the landscape of abdominal radiology.

Impact of contrast-enhanced agent on segmentation using a deep learning-based software "Ai-Seg" for head and neck cancer.

Kihara S, Ueda Y, Harada S, Masaoka A, Kanayama N, Ikawa T, Inui S, Akagi T, Nishio T, Konishi K

PubMed · May 26, 2025
In radiotherapy, auto-segmentation tools based on deep learning assist in contouring organs at risk (OARs). We developed a segmentation model for head and neck (HN) OARs dedicated to contrast-enhanced (CE) computed tomography (CT) using the segmentation software Ai-Seg, and compared performance between CE and non-contrast-enhanced (nCE) CT. This retrospective study used CE CT scans from 321 patients with HN cancers to train a segmentation model (the CE model). The CE model was installed in Ai-Seg and applied to an additional 25 patients with both CE and nCE CT. The Dice similarity coefficient (DSC) and average Hausdorff distance (AHD) were calculated between the ground-truth and Ai-Seg contours for the brain, brainstem, chiasm, optic nerves, cochleae, oral cavity, parotid glands, pharyngeal constrictor muscle, and submandibular glands (SMGs). For six OARs, we compared the CE model with the existing model in Ai-Seg, which was trained on nCE CT. The CE model obtained significantly higher DSCs on CE CT for the parotid and submandibular glands than the existing model. For SMGs, the CE model provided significantly lower DSC values and higher AHD values on nCE CT than on CE CT, but comparable values for the other OARs. The CE model achieved significantly better performance than the existing model and can be used on nCE CT images without a significant performance difference, except for SMGs. Our results may facilitate the adoption of segmentation tools in clinical practice. We developed a segmentation model for HN OARs dedicated to CE CT using Ai-Seg and evaluated its usability on nCE CT.
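
The AHD reported above differs from the maximum Hausdorff distance in that it averages surface distances rather than taking the worst case; a sketch assuming SciPy and binary masks with known voxel spacing:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def average_hausdorff(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric average Hausdorff distance between two binary masks, in mm."""
    a, b = a.astype(bool), b.astype(bool)
    d_to_b = distance_transform_edt(~b, sampling=spacing)  # distance of every voxel to mask b
    d_to_a = distance_transform_edt(~a, sampling=spacing)  # distance of every voxel to mask a
    # Mean distance from a to b and from b to a, averaged for symmetry.
    return 0.5 * (float(d_to_b[a].mean()) + float(d_to_a[b].mean()))
```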