Page 13 of 31302 results

Large Scale MRI Collection and Segmentation of Cirrhotic Liver.

Jha D, Susladkar OK, Gorade V, Keles E, Antalek M, Seyithanoglu D, Cebeci T, Aktas HE, Kartal GD, Kaymakoglu S, Erturk SM, Velichko Y, Ladner DP, Borhani AA, Medetalibeyoglu A, Durak G, Bagci U

pubmed logopapers · May 28 2025
Liver cirrhosis represents the end stage of chronic liver disease, characterized by extensive fibrosis and nodular regeneration that significantly increase mortality risk. While magnetic resonance imaging (MRI) offers a non-invasive assessment, accurately segmenting cirrhotic livers presents substantial challenges due to morphological alterations and heterogeneous signal characteristics. Deep learning approaches show promise for automating these tasks, but progress has been limited by the absence of large-scale, annotated datasets. Here, we present CirrMRI600+, the first comprehensive dataset comprising 628 high-resolution abdominal MRI scans (310 T1-weighted and 318 T2-weighted sequences, totaling nearly 40,000 annotated slices) with expert-validated segmentation labels for cirrhotic livers. The dataset includes demographic information, clinical parameters, and histopathological validation where available. Additionally, we provide benchmark results from 11 state-of-the-art deep learning experiments to establish performance standards. CirrMRI600+ enables the development and validation of advanced computational methods for cirrhotic liver analysis, potentially accelerating progress toward automated cirrhosis visual staging and personalized treatment planning.
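Benchmark experiments like the ones above are typically scored with overlap metrics such as the Dice similarity coefficient (DSC). A minimal sketch of that metric for binary masks, with toy arrays as a usage example (the function name and data are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # By convention, two empty masks agree perfectly.
    return 2.0 * intersection / total if total > 0 else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, truth), 3))  # 2*2/(3+3) = 0.667
```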

Deep Learning-Based Fully Automated Aortic Valve Leaflets and Root Measurement From Computed Tomography Images - A Feasibility Study.

Yamauchi H, Aoyama G, Tsukihara H, Ino K, Tomii N, Takagi S, Fujimoto K, Sakaguchi T, Sakuma I, Ono M

pubmed logopapers · May 28 2025
The aim of this study was to retrain our existing deep learning-based fully automated aortic valve leaflets/root measurement algorithm, using computed tomography (CT) data for root dilatation (RD), and assess its clinical feasibility. 67 ECG-gated cardiac CT scans were retrospectively collected from 40 patients with RD to retrain the algorithm. An additional 100 patients' CT data with aortic stenosis (AS, n=50) and aortic regurgitation (AR) with/without RD (n=50) were collected to evaluate the algorithm; 45 of the AR patients had RD. The algorithm provided patient-specific 3-dimensional aortic valve/root visualization. The measurements of the 100 cases automatically obtained by the algorithm were compared with an expert's manual measurements. Overall, there was a moderate-to-high correlation, with differences of 6.1-13.4 mm<sup>2</sup> for the virtual basal ring area, 1.1-2.6 mm for sinus diameter, 0.1-0.6 mm for coronary artery height, 0.2-0.5 mm for geometric height, and 0.9 mm for effective height, except for the sinotubular junction of the AR cases (10.3 mm), where the border over the dilated sinuses was indistinct, compared with 2.1 mm in AS cases. The measurement time per case by the algorithm (122 s) was significantly shorter than that of the experts (618-1,126 s). This fully automated algorithm can assist in evaluating aortic valve/root anatomy for planning surgical and transcatheter treatments while saving time and minimizing workload.

A Left Atrial Positioning System to Enable Follow-Up and Cohort Studies.

Mehringer NJ, McVeigh ER

pubmed logopapers · May 27 2025
We present a new algorithm to automatically convert 3-dimensional left atrium surface meshes into a standard 2-dimensional space: a Left Atrial Positioning System (LAPS). Forty-five contrast-enhanced 4-dimensional computed tomography datasets were collected from 30 subjects. The left atrium volume was segmented using a trained neural network and converted into a surface mesh. LAPS coordinates were calculated on each mesh by computing lines of longitude and latitude on the surface of the mesh with reference to the center of the posterior wall and the mitral valve. LAPS accuracy was evaluated with one-way transfer of coordinates from a template mesh to a synthetic ground truth, which was created by registering the template mesh and pre-calculated LAPS coordinates to a target mesh. The Euclidean distance error was measured between each test node and its ground truth location. The median point transfer error was 2.13 mm between follow-up scans of the same subject (n = 15) and 3.99 mm between different subjects (n = 30). The left atrium was divided into 24 anatomic regions and represented on a 2D square diagram. The Left Atrial Positioning System is fully automatic, accurate, robust to anatomic variation, and offers flexible visualization for mapping data in the left atrium. This provides a framework for comparing regional LA surface data values in both follow-up and cohort studies.
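The point transfer error reported above reduces to the Euclidean distance between each transferred node and its ground-truth location, summarized by the median. A small sketch under that reading (the array shapes and toy coordinates are assumptions):

```python
import numpy as np

def median_transfer_error(transferred: np.ndarray, truth: np.ndarray) -> float:
    """Median Euclidean distance (mm) between transferred nodes and
    their ground-truth locations; both arrays are (N, 3) point sets."""
    distances = np.linalg.norm(transferred - truth, axis=1)
    return float(np.median(distances))

transferred = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
truth = np.array([[3.0, 4.0, 0.0], [1.0, 1.0, 1.0]])
# distances are 5.0 and 0.0, so the median is 2.5
print(median_transfer_error(transferred, truth))
```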

Development of an Open-Source Algorithm for Automated Segmentation in Clinician-Led Paranasal Sinus Radiologic Research.

Darbari Kaul R, Zhong W, Liu S, Azemi G, Liang K, Zou E, Sacks PL, Thiel C, Campbell RG, Kalish L, Sacks R, Di Ieva A, Harvey RJ

pubmed logopapers · May 27 2025
Artificial Intelligence (AI) research needs to be clinician led; however, the required expertise typically lies outside clinicians' skill set. Collaborations exist but are often commercially driven. Free and open-source computational algorithms and software expertise are required for meaningful clinically driven AI medical research. Deep learning algorithms automate segmenting regions of interest for analysis and clinical translation. Numerous studies have automatically segmented paranasal sinus computed tomography (CT) scans; however, openly accessible algorithms capturing the sinonasal cavity remain scarce. The purpose of this study was to validate and provide an open-source segmentation algorithm for paranasal sinus CTs for the otolaryngology research community. A cross-sectional comparative study was conducted with a deep learning algorithm, UNet++, modified for automatic segmentation of paranasal sinus CTs, against "ground-truth" manual segmentations. A dataset of 100 paranasal sinus scans was manually segmented, with an 80/20 training/testing split. The algorithm is available at https://github.com/rheadkaul/SinusSegment. Primary outcomes included the Dice similarity coefficient (DSC) score, Intersection over Union (IoU), Hausdorff distance (HD), sensitivity, specificity, and visual similarity grading. Twenty scans representing 7300 slices were assessed. The mean DSC was 0.87 and IoU 0.80, with HD 33.61 mm. The mean sensitivity was 83.98% and specificity 99.81%. The median visual similarity grading score was 3 (good). There were no statistically significant differences in outcomes between normal and diseased paranasal sinus CTs. Automatic segmentation of paranasal sinus CTs yields good results when compared with manual segmentation. This study provides an open-source segmentation algorithm as a foundation and gateway for more complex AI-based analysis of large datasets.
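Of the reported metrics, the Hausdorff distance (HD) is the least common to compute by hand: it is the largest distance from any foreground point of one mask to the nearest foreground point of the other, symmetrized over both directions. A sketch using SciPy's `directed_hausdorff` on toy masks (unit voxel spacing is an assumption; the study reports HD in mm):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two binary masks,
    computed on the voxel coordinates of their foreground points."""
    pts_a = np.argwhere(mask_a)
    pts_b = np.argwhere(mask_b)
    d_ab = directed_hausdorff(pts_a, pts_b)[0]  # farthest a-point from b
    d_ba = directed_hausdorff(pts_b, pts_a)[0]  # farthest b-point from a
    return max(d_ab, d_ba)

a = np.zeros((5, 5), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((5, 5), dtype=int); b[2:4, 2:4] = 1
print(hausdorff_distance(a, b))  # sqrt(2), the diagonal offset
```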

Multicentre evaluation of deep learning CT autosegmentation of the head and neck region for radiotherapy.

Pang EPP, Tan HQ, Wang F, Niemelä J, Bolard G, Ramadan S, Kiljunen T, Capala M, Petit S, Seppälä J, Vuolukka K, Kiitam I, Zolotuhhin D, Gershkevitsh E, Lehtiö K, Nikkinen J, Keyriläinen J, Mokka M, Chua MLK

pubmed logopapers · May 27 2025
This multi-institutional study evaluated head-and-neck CT auto-segmentation software across seven institutions globally. Contours for 11 lymph node levels and 7 organs at risk were evaluated in a two-phase study design. Time savings were measured in both phases, and the inter-observer variability across the seven institutions was quantified in phase two. Overall time savings were 42% in phase one and 49% in phase two. Lymph node levels IA, IB, III, IVA, and IVB showed no significant time savings, with some centers reporting longer editing times than manual delineation. All edited ROIs showed reduced inter-observer variability compared to manual segmentation. Our study shows that auto-segmentation can play a crucial role in harmonizing contouring practices globally. However, the clinical benefits of auto-segmentation software vary significantly across ROIs and between clinics; institution-specific commissioning is required to maximize them.

Automated Body Composition Analysis Using DAFS Express on 2D MRI Slices at L3 Vertebral Level.

Akella V, Bagherinasab R, Lee H, Li JM, Nguyen L, Salehin M, Chow VTY, Popuri K, Beg MF

pubmed logopapers · May 27 2025
Body composition analysis is vital in assessing health conditions such as obesity, sarcopenia, and metabolic syndromes. MRI provides detailed images of skeletal muscle (SM), visceral adipose tissue (VAT), and subcutaneous adipose tissue (SAT), but their manual segmentation is labor-intensive and limits clinical applicability. This study validates an automated tool for MRI-based 2D body composition analysis, Data Analysis Facilitation Suite (DAFS) Express, comparing its automated measurements with expert manual segmentations using UK Biobank data. A cohort of 399 participants from the UK Biobank dataset was selected, yielding 423 single L3 slices for analysis. DAFS Express performed automated segmentations of SM, VAT, and SAT, which were then manually corrected by expert raters for validation. Evaluation metrics included Jaccard coefficients, Dice scores, intraclass correlation coefficients (ICCs), and Bland-Altman plots to assess segmentation agreement and reliability. High agreement was observed between automated and manual segmentations, with mean Jaccard scores of SM 99.03%, VAT 95.25%, and SAT 99.57%, and mean Dice scores of SM 99.51%, VAT 97.41%, and SAT 99.78%. Cross-sectional area comparisons showed consistent measurements, with automated methods closely matching manual measurements for SM and SAT, and slightly higher values for VAT (SM: auto 132.51 cm<sup>2</sup>, manual 132.36 cm<sup>2</sup>; VAT: auto 137.07 cm<sup>2</sup>, manual 134.46 cm<sup>2</sup>; SAT: auto 203.39 cm<sup>2</sup>, manual 202.85 cm<sup>2</sup>). ICCs confirmed strong reliability (SM 0.998, VAT 0.994, SAT 0.994). Bland-Altman plots revealed minimal biases, and boxplots illustrated distribution similarities across SM, VAT, and SAT areas. On average, DAFS Express took 18 s per DICOM (126.9 min in total for the 423 images) to output the segmentations and a measurement PDF for each DICOM.
Automated segmentation of SM, VAT, and SAT from 2D MRI images using DAFS Express showed comparable accuracy to manual segmentation. This underscores its potential to streamline image analysis processes in research and clinical settings, enhancing diagnostic accuracy and efficiency. Future work should focus on further validation across diverse clinical applications and imaging conditions.
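The Bland-Altman analysis mentioned above reduces each method pair to a bias (the mean of the paired differences) and 95% limits of agreement (bias ± 1.96 SD). A minimal sketch (the sample values loosely echo the reported areas but are purely illustrative):

```python
import numpy as np

def bland_altman(auto: np.ndarray, manual: np.ndarray):
    """Bland-Altman bias and 95% limits of agreement between two
    measurement methods (e.g. automated vs. manual areas in cm^2)."""
    diffs = auto - manual
    bias = diffs.mean()
    sd = diffs.std(ddof=1)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

auto = np.array([132.5, 137.1, 203.4, 150.0])
manual = np.array([132.4, 134.5, 202.9, 149.0])
bias, lo, hi = bland_altman(auto, manual)
print(round(float(bias), 2))  # mean of the four differences
```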

Deep Learning Auto-segmentation of Diffuse Midline Glioma on Multimodal Magnetic Resonance Images.

Fernández-Patón M, Montoya-Filardi A, Galiana-Bordera A, Martínez-Gironés PM, Veiga-Canuto D, Martínez de Las Heras B, Cerdá-Alberich L, Martí-Bonmatí L

pubmed logopapers · May 27 2025
Diffuse midline glioma (DMG) H3 K27M-altered is a rare pediatric brainstem cancer with poor prognosis. To advance the development of predictive models and gain a deeper understanding of DMG, there is a crucial need for seamlessly integrated, automatic, and highly accurate tumor segmentation techniques. Only one previous method has addressed this task for this cancer; this study therefore develops a modified CNN-based 3D U-Net tool to automatically and accurately segment DMG in magnetic resonance (MR) images. The dataset consisted of 52 DMG patients and 70 images, each with T1W and T2W or FLAIR images. Three different datasets were created: T1W images, T2W or FLAIR images, and a combined set of T1W and T2W/FLAIR images. Denoising, bias field correction, spatial resampling, and normalization were applied as preprocessing steps to the MR images. Patching techniques were also used to enlarge the dataset size. For tumor segmentation, a 3D U-Net architecture with residual blocks was used. The best results were obtained for the dataset composed of all T1W and T2W/FLAIR images, reaching an average Dice Similarity Coefficient (DSC) of 0.883 on the test dataset. These results are comparable to other brain tumor segmentation models and to state-of-the-art results in DMG segmentation using fewer sequences. Our results demonstrate the effectiveness of the proposed 3D U-Net architecture for DMG tumor segmentation. This advancement holds potential for enhancing the precision of diagnostic and predictive models in the context of this challenging pediatric cancer.
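Of the preprocessing steps listed (denoising, bias field correction, spatial resampling, normalization), intensity normalization is the simplest to sketch. A common z-score variant computed over nonzero (foreground) voxels is shown below; the abstract does not specify the authors' exact scheme, so this is an assumption:

```python
import numpy as np

def zscore_normalize(volume: np.ndarray) -> np.ndarray:
    """Z-score intensity normalization over nonzero (foreground) voxels,
    a common preprocessing step for MR images."""
    foreground = volume[volume > 0]
    mean, std = foreground.mean(), foreground.std()
    out = volume.astype(np.float64)
    out[volume > 0] = (out[volume > 0] - mean) / std
    return out

# Foreground [2, 4] has mean 3 and std 1, so the output is [0, -1, 1]
# (background voxels are left at 0).
print(zscore_normalize(np.array([0.0, 2.0, 4.0])))
```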

PlaNet-S: an Automatic Semantic Segmentation Model for Placenta Using U-Net and SegNeXt.

Saito I, Yamamoto S, Takaya E, Harigai A, Sato T, Kobayashi T, Takase K, Ueda T

pubmed logopapers · May 27 2025
This study aimed to develop a fully automated semantic placenta segmentation model that integrates the U-Net and SegNeXt architectures through ensemble learning. A total of 218 pregnant women with suspected placental abnormalities who underwent magnetic resonance imaging (MRI) were enrolled, yielding 1090 annotated images for developing a deep learning model for placental segmentation. The images were standardized and divided into training and test sets. The performance of Placental Segmentation Network (PlaNet-S), which integrates U-Net and SegNeXt within an ensemble framework, was assessed using Intersection over Union (IoU) and counting connected components (CCC) against U-Net, U-Net++, and DS-transUNet. PlaNet-S had significantly higher IoU (0.78, SD = 0.10) than U-Net (0.73, SD = 0.13) (p < 0.005) and DS-transUNet (0.64, SD = 0.16) (p < 0.005), while the difference with U-Net++ (0.77, SD = 0.12) was not statistically significant. The CCC for PlaNet-S was significantly higher than that for U-Net (p < 0.005), U-Net++ (p < 0.005), and DS-transUNet (p < 0.005), matching the ground truth in 86.0%, 56.7%, 67.9%, and 20.9% of the cases, respectively. PlaNet-S achieved higher IoU than U-Net and DS-transUNet, and comparable IoU to U-Net++. Moreover, PlaNet-S significantly outperformed all three models in CCC, indicating better agreement with the ground truth. This model addresses the challenges of time-consuming physician-assisted manual segmentation and offers the potential for diverse applications in placental imaging analyses.
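The CCC metric above counts connected components in a predicted mask and compares the count with the ground truth (a correct placenta prediction should normally form a single component). A sketch using SciPy's component labeling on a 2D toy mask:

```python
import numpy as np
from scipy.ndimage import label

def count_components(mask: np.ndarray) -> int:
    """Number of connected foreground components in a binary mask
    (default 4-connectivity in 2D)."""
    _, n = label(mask)
    return n

mask = np.array([
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
])
print(count_components(mask))  # two separate foreground blobs
```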

Scalable Segmentation for Ultra-High-Resolution Brain MR Images

Xiaoling Hu, Peirong Liu, Dina Zemlyanker, Jonathan Williams Ramirez, Oula Puonti, Juan Eugenio Iglesias

arxiv logopreprint · May 27 2025
Although deep learning has shown great success in 3D brain MRI segmentation, achieving accurate and efficient segmentation of ultra-high-resolution brain images remains challenging due to the lack of labeled training data for fine-scale anatomical structures and high computational demands. In this work, we propose a novel framework that leverages easily accessible, low-resolution coarse labels as spatial references and guidance, without incurring additional annotation cost. Instead of directly predicting discrete segmentation maps, our approach regresses per-class signed distance transform maps, enabling smooth, boundary-aware supervision. Furthermore, to enhance scalability, generalizability, and efficiency, we introduce a scalable class-conditional segmentation strategy, where the model learns to segment one class at a time conditioned on a class-specific input. This novel design not only reduces memory consumption during both training and testing, but also allows the model to generalize to unseen anatomical classes. We validate our method through comprehensive experiments on both synthetic and real-world datasets, demonstrating its superior performance and scalability compared to conventional segmentation approaches.
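Regressing per-class signed distance transform maps, as described above, replaces hard labels with a smooth field whose zero level set is the class boundary. A sketch for a single binary class using SciPy (the negative-inside sign convention is an assumption, not stated in the abstract):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Signed Euclidean distance transform of a binary mask:
    negative inside the object, positive outside."""
    mask = mask.astype(bool)
    outside = distance_transform_edt(~mask)  # distance to the object
    inside = distance_transform_edt(mask)    # distance to the background
    return outside - inside

mask = np.zeros((5, 5), dtype=int)
mask[1:4, 1:4] = 1
sdt = signed_distance_map(mask)
print(sdt[0, 0], sdt[2, 2])  # positive at a corner, negative at the center
```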

An orchestration learning framework for ultrasound imaging: Prompt-Guided Hyper-Perception and Attention-Matching Downstream Synchronization.

Lin Z, Li S, Wang S, Gao Z, Sun Y, Lam CT, Hu X, Yang X, Ni D, Tan T

pubmed logopapers · May 27 2025
Ultrasound imaging is pivotal in clinical diagnostics due to its affordability, portability, safety, real-time capability, and non-invasive nature. It is widely utilized for examining various organs, such as the breast, thyroid, ovary, and heart. However, the manual interpretation and annotation of ultrasound images are time-consuming and prone to variability among physicians. While single-task artificial intelligence (AI) solutions have been explored, they are not ideal for scaling AI applications in medical imaging. Foundation models, although a trending solution, often struggle with real-world medical datasets due to factors such as noise and variability, and an inability to flexibly align prior knowledge with task adaptation. To address these limitations, we propose an orchestration learning framework named PerceptGuide for general-purpose ultrasound classification and segmentation. Our framework incorporates a novel orchestration mechanism based on prompted hyper-perception, which adapts to the diverse inductive biases required by different ultrasound datasets. Unlike self-supervised pre-trained models, which require extensive fine-tuning, our approach leverages supervised pre-training to directly capture task-relevant features, providing a stronger foundation for multi-task and multi-organ ultrasound imaging. To support this research, we compiled a large-scale Multi-task, Multi-organ public ultrasound dataset (M<sup>2</sup>-US), featuring images from 9 organs and 16 datasets, encompassing both classification and segmentation tasks. Our approach employs four specific prompts (Object, Task, Input, and Position) to guide the model, ensuring task-specific adaptability. Additionally, a downstream synchronization training stage is introduced to fine-tune the model on new data, significantly improving generalization capabilities and enabling real-world applications.
Experimental results demonstrate the robustness and versatility of our framework in handling multi-task and multi-organ ultrasound image processing, outperforming both specialist models and existing general AI solutions. Compared to specialist models, our method improves segmentation from 82.26% to 86.45% and classification from 71.30% to 79.08%, while also significantly reducing the parameter count.