
UltraBones100k: A reliable automated labeling method and large-scale dataset for ultrasound-based bone surface extraction.

Wu L, Cavalcanti NA, Seibold M, Loggia G, Reissner L, Hein J, Beeler S, Viehöfer A, Wirth S, Calvet L, Fürnstahl P

PubMed · Jun 4, 2025
Ultrasound-based bone surface segmentation is crucial in computer-assisted orthopedic surgery. However, ultrasound images have limitations, including a low signal-to-noise ratio, acoustic shadowing, and speckle noise, which make interpretation difficult. Existing deep learning models for bone segmentation rely primarily on costly manual labeling by experts, limiting dataset size and model generalizability. Additionally, the complexity of ultrasound physics and acoustic shadowing makes the images difficult for humans to interpret, leading to incomplete labels in low-intensity and anechoic regions and limiting model performance. To advance the state-of-the-art in ultrasound bone segmentation and establish effective model benchmarks, larger and higher-quality datasets are needed. We propose a methodology for collecting ex-vivo ultrasound datasets with automatically generated bone labels, including anechoic regions. The proposed labels are derived by accurately superimposing tracked bone Computed Tomography (CT) models onto the tracked ultrasound images. These initial labels are refined to account for ultrasound physics. To clinically evaluate the proposed method, an expert physician from our university hospital specializing in orthopedic sonography assessed the quality of the generated bone labels. A neural network for bone segmentation is trained on the collected dataset and its predictions are compared to expert manual labels, evaluating accuracy, completeness, and F1-score. We collected UltraBones100k, the largest known dataset of its kind, comprising 100k ex-vivo ultrasound images of human lower limbs with bone annotations, specifically targeting the fibula, tibia, and foot bones. A Wilcoxon signed-rank test with Bonferroni correction confirmed that the bone alignment after our optimization pipeline significantly improved the quality of bone labeling (p < 0.001). The model trained on UltraBones100k consistently outperforms manual labeling in all metrics, particularly in low-intensity regions (at a distance threshold of 0.5 mm: a 320% improvement in completeness, a 27.4% improvement in accuracy, and a 197% improvement in F1 score). CONCLUSION: This work shows promise for facilitating research and clinical translation of ultrasound imaging in computer-assisted interventions, particularly for applications such as 2D bone segmentation, 3D bone surface reconstruction, and multi-modality bone registration.
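
A minimal sketch of the distance-threshold surface metrics reported above (completeness, accuracy, and F1 at a 0.5 mm threshold); the binary masks, isotropic pixel spacing, and function name are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def surface_metrics(pred, ref, spacing_mm=0.1, tau_mm=0.5):
    """pred, ref: boolean 2D masks of surface pixels; spacing_mm: assumed isotropic pixel size."""
    # Distance (in mm) from every pixel to the nearest reference/predicted surface pixel.
    dist_to_ref = distance_transform_edt(~ref) * spacing_mm
    dist_to_pred = distance_transform_edt(~pred) * spacing_mm
    # Accuracy (precision): fraction of predicted pixels within tau of the reference surface.
    precision = (dist_to_ref[pred] <= tau_mm).mean() if pred.any() else 0.0
    # Completeness (recall): fraction of reference pixels within tau of the prediction.
    recall = (dist_to_pred[ref] <= tau_mm).mean() if ref.any() else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```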

Vascular segmentation of functional ultrasound images using deep learning.

Sebia H, Guyet T, Pereira M, Valdebenito M, Berry H, Vidal B

PubMed · Jun 4, 2025
Segmentation of medical images is a fundamental task with numerous applications. While MRI, CT, and PET modalities have significantly benefited from deep learning segmentation techniques, more recent modalities, like functional ultrasound (fUS), have seen limited progress. fUS is a non-invasive imaging method that measures changes in cerebral blood volume (CBV) with high spatio-temporal resolution. However, distinguishing arterioles from venules in fUS is challenging due to opposing blood flow directions within the same pixel. Ultrasound localization microscopy (ULM) can enhance resolution by tracking microbubble contrast agents, but it is invasive and lacks dynamic CBV quantification. In this paper, we introduce the first deep learning-based application for fUS image segmentation, capable of differentiating signals based on vertical flow direction (upward vs. downward), using ULM-based automatic annotation, and enabling dynamic CBV quantification. In the cortical vasculature, this distinction in flow direction provides a proxy for differentiating arteries from veins. We evaluate various UNet architectures on fUS images of rat brains, achieving competitive segmentation performance, with 90% accuracy, a 71% F1 score, and an IoU of 0.59, using only 100 temporal frames from a fUS stack. These results are comparable to those from tubular structure segmentation in other imaging modalities. Additionally, models trained on resting-state data generalize well to images captured during visual stimulation, highlighting their robustness. Although it does not reach the full granularity of ULM, the proposed method provides a practical, non-invasive, and cost-effective solution for inferring flow direction, particularly valuable in scenarios where ULM is not available or feasible. Our pipeline shows high linear correlation coefficients between signals from predicted and actual compartments, showcasing its ability to accurately capture blood flow dynamics.
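
To make the signal-level evaluation concrete, here is a minimal sketch of correlating per-compartment CBV signals between a predicted mask and the ULM-derived reference; the array layout and function names are assumptions, not the paper's code:

```python
import numpy as np

def compartment_signal(fus_stack, mask):
    """fus_stack: (T, H, W) array of fUS frames; mask: (H, W) boolean compartment mask."""
    return fus_stack[:, mask].mean(axis=1)  # one mean CBV value per frame

def signal_correlation(fus_stack, pred_mask, ref_mask):
    # Pearson correlation between predicted- and reference-compartment time series.
    pred_sig = compartment_signal(fus_stack, pred_mask)
    ref_sig = compartment_signal(fus_stack, ref_mask)
    return np.corrcoef(pred_sig, ref_sig)[0, 1]
```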

Subgrouping autism and ADHD based on structural MRI population modelling centiles.

Pecci-Terroba C, Lai MC, Lombardo MV, Chakrabarti B, Ruigrok ANV, Suckling J, Anagnostou E, Lerch JP, Taylor MJ, Nicolson R, Georgiades S, Crosbie J, Schachar R, Kelley E, Jones J, Arnold PD, Seidlitz J, Alexander-Bloch AF, Bullmore ET, Baron-Cohen S, Bedford SA, Bethlehem RAI

PubMed · Jun 4, 2025
Autism and attention deficit hyperactivity disorder (ADHD) are two highly heterogeneous neurodevelopmental conditions with variable underlying neurobiology. Imaging studies have yielded varied results, and it is now clear that there is unlikely to be one characteristic neuroanatomical profile of either condition. Parsing this heterogeneity could allow us to identify more homogeneous subgroups, either within or across conditions, which may be more clinically informative. This has been a pivotal goal for neurodevelopmental research using both clinical and neuroanatomical features, though results thus far have again been inconsistent with regards to the number and characteristics of subgroups. Here, we use population modelling to cluster a multi-site dataset based on global and regional centile scores of cortical thickness, surface area, and grey matter volume. We use HYDRA, a novel semi-supervised machine learning algorithm that clusters based on differences from controls, and compare its performance to a traditional clustering approach. We identified distinct subgroups within autism and ADHD, as well as across diagnoses, often with opposite neuroanatomical alterations relative to controls. These subgroups were characterised by different combinations of increased or decreased morphometric patterns. We did not find significant clinical differences across subgroups. Crucially, however, the number of subgroups and their membership differed vastly depending on the chosen features and the algorithm used, highlighting the impact and importance of careful method selection. We highlight the importance of examining heterogeneity in autism and ADHD and demonstrate that population modelling is a useful tool for studying subgrouping in these conditions. We identified subgroups with distinct patterns of alterations relative to controls, but note that these results rely heavily on the algorithm used, and we encourage detailed reporting of methods and features in future studies.
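
HYDRA itself is not reproduced here; as a deliberately simplified stand-in for clustering on differences from controls, the sketch below z-scores case centiles against the control distribution and applies k-means. All names and parameters are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def subgroup_cases(case_centiles, control_centiles, n_clusters=3, seed=0):
    """case_centiles, control_centiles: (n_subjects, n_features) centile-score arrays."""
    # Express each case as a z-scored deviation from the control distribution,
    # echoing the idea of clustering on differences from controls (not HYDRA's algorithm).
    mu = control_centiles.mean(axis=0)
    sd = control_centiles.std(axis=0)
    deviations = (case_centiles - mu) / sd
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10)
    return km.fit_predict(deviations)  # subgroup label per case
```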

Computed tomography-based radiomics model for predicting station 4 lymph node metastasis in non-small cell lung cancer.

Kang Y, Li M, Xing X, Qian K, Liu H, Qi Y, Liu Y, Cui Y, Zhang H

PubMed · Jun 4, 2025
This study aimed to develop and validate machine learning models for preoperative identification of metastasis to station 4 mediastinal lymph nodes (MLNM) in non-small cell lung cancer (NSCLC) patients at pathological N0-N2 (pN0-pN2) stage, thereby enhancing the precision of clinical decision-making. We included a total of 356 NSCLC patients at pN0-pN2 stage, divided into training (n = 207), internal test (n = 90), and independent test (n = 59) sets. Station 4 mediastinal lymph node (LN) regions of interest (ROIs) were semi-automatically segmented on venous-phase computed tomography (CT) images for radiomics feature extraction. Least absolute shrinkage and selection operator (LASSO) regression was used to select features with non-zero coefficients. Four machine learning algorithms, namely decision tree (DT), logistic regression (LR), random forest (RF), and support vector machine (SVM), were employed to construct radiomics models. Clinical predictors were identified through univariate and multivariate logistic regression and subsequently integrated with radiomics features to develop combined models. Model performance was evaluated using receiver operating characteristic (ROC) analysis, calibration curves, decision curve analysis (DCA), and DeLong's test. Out of 1721 radiomics features, eight were selected using LASSO regression. The RF-based combined model exhibited the strongest discriminative power, with an area under the curve (AUC) of 0.934 for the training set and 0.889 for the internal test set. The calibration curve and DCA further indicated the superior performance of the RF-based combined model, and the independent test set verified the model's robustness. The combined model based on RF, integrating radiomics and clinical features, effectively and non-invasively identifies metastasis to the station 4 mediastinal LNs in NSCLC patients at pN0-pN2 stage. This model serves as an effective auxiliary tool for clinical decision-making and has the potential to optimize treatment strategies and improve prognostic assessment for pN0-pN2 patients.
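
A minimal sketch of the LASSO-then-random-forest pipeline described above, using scikit-learn; the feature matrices, penalty selection, and hyperparameters are placeholders, not the study's settings:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

def fit_radiomics_model(X_train, y_train, X_test, y_test, seed=0):
    """X_*: (n_samples, n_radiomics_features) arrays; y_*: binary metastasis labels."""
    scaler = StandardScaler().fit(X_train)
    Xs = scaler.transform(X_train)
    # LASSO with cross-validated penalty; keep features with non-zero coefficients.
    lasso = LassoCV(cv=5, random_state=seed).fit(Xs, y_train)
    keep = np.flatnonzero(lasso.coef_)
    # Random forest on the selected features only.
    rf = RandomForestClassifier(n_estimators=500, random_state=seed)
    rf.fit(Xs[:, keep], y_train)
    probs = rf.predict_proba(scaler.transform(X_test)[:, keep])[:, 1]
    return rf, keep, roc_auc_score(y_test, probs)  # model, selected indices, test AUC
```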

Deep learning-based cone-beam CT motion compensation with single-view temporal resolution.

Maier J, Sawall S, Arheit M, Paysan P, Kachelrieß M

PubMed · Jun 4, 2025
Cone-beam CT (CBCT) scans that are affected by motion often require motion compensation to reduce artifacts or to reconstruct 4D (3D+time) representations of the patient. To do so, most existing strategies rely on some sort of gating strategy that sorts the acquired projections into motion bins. Subsequently, these bins can be reconstructed individually before further post-processing may be applied to improve image quality. While this concept is useful for periodic motion patterns, it fails in the case of non-periodic motion as observed, for example, in irregularly breathing patients. To address this issue and to increase temporal resolution, we propose deep single angle-based motion compensation (SAMoCo). To avoid gating, and therefore its downsides, deep SAMoCo trains a U-net-like network to predict displacement vector fields (DVFs) representing the motion that occurred between any two given time points of the scan. To do so, 4D clinical CT scans are used to simulate 4D CBCT scans as well as the corresponding ground truth DVFs that map between the different motion states of the scan. The network is then trained to predict these DVFs as a function of the respective projection views and an initial 3D reconstruction. Once the network is trained, an arbitrary motion state corresponding to a certain projection view of the scan can be recovered by estimating DVFs from any other state or view and by considering them during reconstruction. Applied to 4D CBCT simulations of breathing patients, deep SAMoCo provides high-quality reconstructions for periodic and non-periodic motion. Here, the deviations with respect to the ground truth are less than 27 HU on average, while respiratory motion, or the diaphragm position, can be resolved with an accuracy of about 0.75 mm. Similar results were obtained for real measurements, where a high correlation with external motion monitoring signals could be observed, even in patients with highly irregular respiration. The ability to estimate DVFs as a function of two arbitrary projection views and an initial 3D reconstruction makes deep SAMoCo applicable to arbitrary motion patterns with single-view temporal resolution. Therefore, deep SAMoCo is particularly useful for cases with unsteady breathing, compensation of residual motion during a breath-hold scan, or scans with fast gantry rotation times in which the data acquisition only covers a very limited number of breathing cycles. Furthermore, not requiring gating signals may simplify the clinical workflow and reduce the time needed for patient preparation.
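
A minimal sketch of the core operation implied above: applying a predicted DVF to warp a reconstruction from one motion state to another. The voxel-space displacement convention and pull-back interpolation are assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_volume(volume, dvf):
    """volume: (Z, Y, X) reconstruction; dvf: (3, Z, Y, X) displacements in voxels."""
    grid = np.indices(volume.shape).astype(np.float64)
    # Pull-back warping: sample the source volume at grid + displacement.
    coords = grid + dvf
    return map_coordinates(volume, coords, order=1, mode='nearest')
```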

Deep learning based rapid X-ray fluorescence signal extraction and image reconstruction for preclinical benchtop X-ray fluorescence computed tomography applications.

Kaphle A, Jayarathna S, Cho SH

PubMed · Jun 4, 2025
Recent research advances have resulted in an experimental benchtop X-ray fluorescence computed tomography (XFCT) system that likely meets the imaging dose/scan time constraints for benchtop XFCT imaging of live mice injected with gold nanoparticles (GNPs). For routine in vivo benchtop XFCT imaging, however, additional challenges, most notably the need for rapid/near-real-time handling of X-ray fluorescence (XRF) signal extraction and XFCT image reconstruction, must be successfully addressed. Here we propose a novel end-to-end deep learning (DL) framework that integrates a one-dimensional convolutional neural network (1D CNN) for rapid XRF signal extraction with a U-Net model for XFCT image reconstruction. We trained the models using a comprehensive dataset including experimentally acquired and augmented XRF/scatter photon spectra from various GNP concentrations and imaging scenarios, including phantom and synthetic mouse models. The DL framework demonstrated exceptional performance in both tasks. The 1D CNN achieved a high coefficient of determination (R² > 0.9885) and a low mean absolute error (MAE < 0.6248) in XRF signal extraction. The U-Net model achieved an average structural similarity index measure (SSIM) of 0.9791 and a peak signal-to-noise ratio (PSNR) of 39.11 in XFCT image reconstruction, closely matching ground truth images. Notably, the DL approach reduced the total post-processing time per slice from approximately 6 min with the conventional approach to just 1.25 s.
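
A minimal sketch of a 1D CNN for per-bin XRF signal extraction of the kind described; the layer sizes, depth, and spectrum length are hypothetical, not the authors' architecture:

```python
import torch
import torch.nn as nn

class XRF1DCNN(nn.Module):
    """Maps a measured XRF/scatter spectrum to the extracted XRF signal, bin by bin."""
    def __init__(self, n_channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, n_channels, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(n_channels, n_channels, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(n_channels, 1, kernel_size=1),  # per-bin regression output
        )

    def forward(self, spectrum):       # spectrum: (batch, 1, n_energy_bins)
        return self.net(spectrum)      # extracted signal, same shape

model = XRF1DCNN()
dummy = torch.randn(4, 1, 1024)        # 4 spectra with 1024 energy bins (assumed)
out = model(dummy)                     # -> (4, 1, 1024)
```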

AI-powered segmentation of bifid mandibular canals using CBCT.

Gumussoy I, Demirezer K, Duman SB, Haylaz E, Bayrakdar IS, Celik O, Syed AZ

PubMed · Jun 4, 2025
Accurate segmentation of the mandibular and bifid canals is crucial for safe dental implant planning, third molar extractions, and other surgical interventions. The objective of this study is to develop and validate an innovative artificial intelligence tool for the efficient and accurate segmentation of the mandibular and bifid canals on CBCT. CBCT data were screened to identify patients with clearly visible bifid canal variations, and their DICOM files were extracted. These DICOM files were then imported into the 3D Slicer® open-source software, where bifid canals and mandibular canals were annotated. The annotated data, along with the raw DICOM files, were processed using the nnU-Netv2 training model by the CranioCatch AI software team. Sixty-nine anonymized CBCT volumes in DICOM format were converted to the NIfTI file format. The method, utilizing nnU-Net v2, accurately predicted the voxels associated with the mandibular canal, achieving an intersection of over 50% in nearly all samples. The accuracy, Dice score, precision, and recall for the mandibular canal/bifid canal were 0.99/0.99, 0.82/0.46, 0.85/0.70, and 0.80/0.42, respectively. Although the bifid canal segmentation did not meet the expected level of success, the findings indicate that the proposed method shows promise and has the potential to be utilized as a supplementary tool for mandibular canal segmentation. Given the importance of accurately evaluating the mandibular canal before surgery, artificial intelligence could reduce the burden on practitioners by automating the complicated and time-consuming process of tracing and segmenting this structure. Being able to distinguish bifid canals with artificial intelligence will help prevent neurovascular complications that may occur during or after surgery.
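
A minimal sketch of the voxel-level metrics reported above (accuracy, Dice, precision, recall), computed from binary predicted and reference canal masks; the function name and mask layout are assumptions:

```python
import numpy as np

def voxel_metrics(pred, ref):
    """pred, ref: boolean 3D arrays of equal shape."""
    tp = np.logical_and(pred, ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    tn = np.logical_and(~pred, ~ref).sum()
    accuracy = (tp + tn) / pred.size
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if 2 * tp + fp + fn else 0.0
    return accuracy, dice, precision, recall
```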

Latent space reconstruction for missing data problems in CT.

Kabelac A, Eulig E, Maier J, Hammermann M, Knaup M, Kachelrieß M

PubMed · Jun 4, 2025
The reconstruction of a computed tomography (CT) image can be compromised by artifacts, which, in many cases, reduce the diagnostic value of the image. These artifacts often result from missing or corrupt regions in the projection data, caused, for example, by truncation, metal, or limited-angle acquisitions. In this work, we introduce a novel deep learning-based framework, latent space reconstruction (LSR), which enables correction of various types of artifacts arising from missing or corrupted data. First, we train a generative neural network on uncorrupted CT images. After training, we iteratively search for the point in the latent space of this network that best matches the compromised projection data we measured. Once an optimal point is found, forward projection of the generated CT image can be used to inpaint the corrupted or incomplete regions of the measured raw data. We used LSR to correct for truncation and metal artifacts. For truncation artifact correction, images corrected by LSR show effective artifact suppression within the field of measurement (FOM), alongside a substantial high-quality extension of the FOM compared to other methods. For metal artifact correction, images corrected by LSR demonstrate effective artifact reduction, providing a clearer view of the surrounding tissues and anatomical details. The results indicate that LSR is effective in correcting metal and truncation artifacts. Furthermore, the versatility of LSR allows its application to various other types of artifacts resulting from missing or corrupt data.
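
A minimal sketch of the LSR search loop as described: optimize a latent code so the forward projection of the generated image matches the measured data where it is valid, then inpaint the invalid regions. `generator` and `forward_project` are assumed differentiable stand-ins, and the latent size, optimizer, and step count are placeholders:

```python
import torch

def latent_space_reconstruction(generator, forward_project, measured, valid_mask,
                                latent_dim=512, steps=500, lr=1e-2):
    """measured: sinogram tensor; valid_mask: boolean tensor marking uncorrupted data."""
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        image = generator(z)                 # generated CT image
        sino = forward_project(image)        # simulated projection data
        # Match the measured data only where it is not missing/corrupt.
        loss = ((sino - measured)[valid_mask] ** 2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        image = generator(z)
        # Inpaint corrupted regions of the raw data with the forward projection.
        completed = torch.where(valid_mask, measured, forward_project(image))
    return image, completed
```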

Advancing prenatal healthcare by explainable AI enhanced fetal ultrasound image segmentation using U-Net++ with attention mechanisms.

Singh R, Gupta S, Mohamed HG, Bharany S, Rehman AU, Ghadi YY, Hussen S

PubMed · Jun 4, 2025
Prenatal healthcare development requires accurate automated techniques for fetal ultrasound image segmentation. Such techniques enable standardized evaluation of fetal development by minimizing time-consuming manual processes that are prone to human error. This research develops a segmentation framework based on U-Net++ with a ResNet backbone, incorporating attention components to enhance feature extraction in low-contrast, noisy ultrasound data. The model leverages the nested skip connections of U-Net++ and the residual learning of ResNet-34 to achieve state-of-the-art segmentation accuracy. Evaluation on a large fetal ultrasound image collection yielded superior results: a 97.52% Dice coefficient, 95.15% Intersection over Union (IoU), and a 3.91 mm Hausdorff distance. The pipeline integrates Grad-CAM++ to explain model decisions, enhancing clinical utility and trust. This explainability component enables medical professionals to examine how the model reaches its decisions, producing transparent, verifiable segmentation outputs for better overall reliability. The framework bridges the gap between AI automation and clinical interpretability by showing which image regions drive predictions. The research shows that deep learning combined with Explainable AI (XAI) can deliver highly accurate medical imaging solutions. The proposed system is suited to clinical workflows, offering a sophisticated prenatal diagnostic instrument that can improve healthcare outcomes.
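
A minimal sketch of an additive attention gate of the kind used to weight skip-connection features in attention-augmented U-Net variants; the channel sizes and the same-resolution gating signal are simplifying assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention over skip features, gated by decoder features."""
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, skip, gate):
        # Score each spatial position of the skip features, then reweight them,
        # suppressing regions irrelevant to the gating signal.
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(skip) + self.phi(gate))))
        return skip * attn

gate = AttentionGate(skip_ch=64, gate_ch=64, inter_ch=32)
skip = torch.randn(1, 64, 128, 128)
g = torch.randn(1, 64, 128, 128)  # gating signal, same spatial size assumed here
out = gate(skip, g)                # -> (1, 64, 128, 128)
```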

Digital removal of dermal denticle layer using geometric AI from 3D CT scans of shark craniofacial structures enhances anatomical precision.

Kim SW, Yuen AHL, Kim HW, Lee S, Lee SB, Lee YM, Jung WJ, Poon CTC, Park D, Kim S, Kim SG, Kang JW, Kwon J, Jo SJ, Giri SS, Park H, Seo JP, Kim DS, Kim BY, Park SC

PubMed · Jun 4, 2025
Craniofacial morphometrics in sharks provide crucial insights into evolutionary history, geographical variation, sexual dimorphism, and developmental patterns. However, the fragile cartilaginous nature of the shark craniofacial skeleton poses significant challenges for traditional specimen preparation, often resulting in damaged cranial landmarks and compromised measurement accuracy. While computed tomography (CT) offers a non-invasive alternative for anatomical observation, the high electron density of dermal denticles in sharks creates a unique challenge, obstructing clear visualization of internal structures in three-dimensional volume-rendered images (3DVRI). This study presents an artificial intelligence (AI)-based solution using machine-learning algorithms for digitally removing the dermal denticle layer from CT scans of the shark craniofacial skeleton. We developed a geometric AI-driven software (SKINPEELER) that selectively removes high-intensity voxels corresponding to the dermal denticle layer while preserving the underlying anatomical structures. We evaluated this approach using CT scans from 20 sharks (16 Carcharhinus brachyurus, 2 Alopias vulpinus, 1 Sphyrna lewini, and 1 Prionace glauca), applying our AI-driven software to process the Digital Imaging and Communications in Medicine (DICOM) images. The processed scans were reconstructed using bone reconstruction algorithms to enable precise craniofacial measurements. We assessed the accuracy of our method by comparing measurements from the processed 3DVRIs with traditional manual measurements. The AI-assisted approach demonstrated high accuracy (86.16-98.52%) relative to manual measurements. Additionally, we evaluated reproducibility and repeatability using intraclass correlation coefficients (ICC), finding high reproducibility (ICC: 0.456-0.998) and repeatability (ICC: 0.985-1.000 for operator 1 and 0.882-0.999 for operator 2). Our results indicate that this AI-enhanced digital denticle removal technique, combined with 3D CT reconstruction, provides a reliable and non-destructive alternative to traditional specimen preparation methods for investigating shark craniofacial morphology. This novel approach enhances measurement precision while preserving specimen integrity, potentially advancing shark research in areas including evolutionary studies, conservation efforts, and anatomical investigations.
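
A deliberately simplified sketch of the digital denticle-removal idea: suppress the thin, high-intensity surface shell while keeping interior skeletal voxels. The threshold, shell depth, and function name are illustrative assumptions, not SKINPEELER's actual parameters or geometry-aware method:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def strip_denticle_layer(volume_hu, body_threshold=-200, shell_voxels=3):
    """volume_hu: 3D CT volume in HU-like units."""
    body = volume_hu > body_threshold                  # crude body mask
    interior = binary_erosion(body, iterations=shell_voxels)
    shell = body & ~interior                           # outermost voxel layers
    cleaned = volume_hu.copy()
    cleaned[shell] = volume_hu.min()                   # blank out the denticle shell
    return cleaned
```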