
Vascular segmentation of functional ultrasound images using deep learning.

Sebia H, Guyet T, Pereira M, Valdebenito M, Berry H, Vidal B

PubMed · Jun 4, 2025
Segmentation of medical images is a fundamental task with numerous applications. While MRI, CT, and PET modalities have significantly benefited from deep learning segmentation techniques, more recent modalities, like functional ultrasound (fUS), have seen limited progress. fUS is a non-invasive imaging method that measures changes in cerebral blood volume (CBV) with high spatio-temporal resolution. However, distinguishing arterioles from venules in fUS is challenging due to opposing blood flow directions within the same pixel. Ultrasound localization microscopy (ULM) can enhance resolution by tracking microbubble contrast agents, but it is invasive and lacks dynamic CBV quantification. In this paper, we introduce the first deep learning-based application for fUS image segmentation, capable of differentiating signals based on vertical flow direction (upward vs. downward), using ULM-based automatic annotation, and enabling dynamic CBV quantification. In the cortical vasculature, this distinction in flow direction provides a proxy for differentiating arteries from veins. We evaluate various UNet architectures on fUS images of rat brains, achieving competitive segmentation performance, with 90% accuracy, a 71% F1 score, and an IoU of 0.59, using only 100 temporal frames from a fUS stack. These results are comparable to those from tubular structure segmentation in other imaging modalities. Additionally, models trained on resting-state data generalize well to images captured during visual stimulation, highlighting robustness. Although it does not reach the full granularity of ULM, the proposed method provides a practical, non-invasive, and cost-effective solution for inferring flow direction, particularly valuable in scenarios where ULM is not available or feasible. Our pipeline shows high linear correlation coefficients between signals from predicted and actual compartments, showcasing its ability to accurately capture blood flow dynamics.
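The reported scores (90% accuracy, 71% F1, IoU of 0.59) are standard pixel-wise segmentation metrics. The sketch below shows one common way to compute them for a single binary mask (e.g., the upward-flow class against its ULM-derived label); the array names and random example data are hypothetical, not the authors' pipeline.

```python
# A minimal sketch of pixel-wise accuracy, F1 (Dice), and IoU for binary masks.
import numpy as np

def segmentation_metrics(pred: np.ndarray, target: np.ndarray):
    """pred, target: boolean masks of identical shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return float(accuracy), float(f1), float(iou)

# Hypothetical example: compare a predicted upward-flow mask against its label.
pred = np.random.rand(128, 128) > 0.5
label = np.random.rand(128, 128) > 0.5
print(segmentation_metrics(pred, label))
```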

Average Calibration Losses for Reliable Uncertainty in Medical Image Segmentation

Theodore Barfoot, Luis C. Garcia-Peraza-Herrera, Samet Akcay, Ben Glocker, Tom Vercauteren

arXiv preprint · Jun 4, 2025
Deep neural networks for medical image segmentation are often overconfident, compromising both reliability and clinical utility. In this work, we propose differentiable formulations of marginal L1 Average Calibration Error (mL1-ACE) as an auxiliary loss that can be computed on a per-image basis. We compare both hard- and soft-binning approaches to directly improve pixel-wise calibration. Our experiments on four datasets (ACDC, AMOS, KiTS, BraTS) demonstrate that incorporating mL1-ACE significantly reduces calibration errors, particularly Average Calibration Error (ACE) and Maximum Calibration Error (MCE), while largely maintaining high Dice Similarity Coefficients (DSCs). We find that the soft-binned variant yields the greatest calibration improvements over the Dice plus cross-entropy loss baseline but often compromises segmentation performance, whereas hard-binned mL1-ACE maintains segmentation performance, albeit with weaker calibration improvement. To gain further insight into calibration performance and its variability across an imaging dataset, we introduce dataset reliability histograms, an aggregation of per-image reliability diagrams. The resulting analysis highlights improved alignment between predicted confidences and true accuracies. Overall, our approach not only enhances the trustworthiness of segmentation predictions but also shows potential for safer integration of deep learning methods into clinical workflows. We share our code here: https://github.com/cai4cai/Average-Calibration-Losses
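For intuition, a hard-binned, per-image L1 average calibration error can be evaluated as below for a single foreground class: bin predicted probabilities, compare each bin's mean confidence with its empirical accuracy, and average over non-empty bins. This is only an evaluation-time sketch with an assumed bin count and equal bin weighting; the paper's differentiable (soft-binned) loss formulation is not reproduced here.

```python
# A rough sketch of hard-binned L1 ACE for one image and one foreground class.
import numpy as np

def hard_binned_l1_ace(probs: np.ndarray, labels: np.ndarray, n_bins: int = 10) -> float:
    """probs: predicted foreground probabilities in [0, 1]; labels: {0, 1} ground truth."""
    probs, labels = probs.ravel(), labels.ravel().astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.clip(np.digitize(probs, edges[1:-1]), 0, n_bins - 1)
    errors = []
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():  # ACE averages only over non-empty bins, each weighted equally
            errors.append(abs(probs[mask].mean() - labels[mask].mean()))
    return float(np.mean(errors)) if errors else 0.0
```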

UltraBones100k: A reliable automated labeling method and large-scale dataset for ultrasound-based bone surface extraction.

Wu L, Cavalcanti NA, Seibold M, Loggia G, Reissner L, Hein J, Beeler S, Viehöfer A, Wirth S, Calvet L, Fürnstahl P

PubMed · Jun 4, 2025
Ultrasound-based bone surface segmentation is crucial in computer-assisted orthopedic surgery. However, ultrasound images have limitations, including a low signal-to-noise ratio, acoustic shadowing, and speckle noise, which make interpretation difficult. Existing deep learning models for bone segmentation rely primarily on costly manual labeling by experts, limiting dataset size and model generalizability. Additionally, the complexity of ultrasound physics and acoustic shadowing makes the images difficult for humans to interpret, leading to incomplete labels in low-intensity and anechoic regions and limiting model performance. To advance the state-of-the-art in ultrasound bone segmentation and establish effective model benchmarks, larger and higher-quality datasets are needed. We propose a methodology for collecting ex-vivo ultrasound datasets with automatically generated bone labels, including anechoic regions. The proposed labels are derived by accurately superimposing tracked bone Computed Tomography (CT) models onto the tracked ultrasound images. These initial labels are refined to account for ultrasound physics. To clinically evaluate the proposed method, an expert physician from our university hospital specialized in orthopedic sonography assessed the quality of the generated bone labels. A neural network for bone segmentation is trained on the collected dataset and its predictions are compared to expert manual labels, evaluating accuracy, completeness, and F1-score. We collected UltraBones100k, the largest known dataset comprising 100k ex-vivo ultrasound images of human lower limbs with bone annotations, specifically targeting the fibula, tibia, and foot bones. A Wilcoxon signed-rank test with Bonferroni correction confirmed that the bone alignment after our optimization pipeline significantly improved the quality of bone labeling (p<0.001). The model trained on UltraBones100k consistently outperforms manual labeling in all metrics, particularly in low-intensity regions (at a distance threshold of 0.5 mm: 320% improvement in completeness, 27.4% improvement in accuracy, and 197% improvement in F1 score). This work promises to facilitate research and clinical translation of ultrasound imaging in computer-assisted interventions, particularly for applications such as 2D bone segmentation, 3D bone surface reconstruction, and multi-modality bone registration.
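The distance-thresholded completeness/accuracy/F1 scores cited above are commonly computed from nearest-neighbor distances between predicted and ground-truth surface points. The sketch below assumes those definitions (fraction of ground-truth points within the threshold of a prediction, and vice versa, with F1 as their harmonic mean); the point-cloud names are placeholders and the exact metric definitions used in the paper may differ.

```python
# A minimal sketch of distance-thresholded surface metrics using a k-d tree.
import numpy as np
from scipy.spatial import cKDTree

def surface_scores(pred_pts: np.ndarray, gt_pts: np.ndarray, thresh_mm: float = 0.5):
    """pred_pts: (N, 3) predicted surface points in mm; gt_pts: (M, 3) ground-truth points."""
    d_gt_to_pred, _ = cKDTree(pred_pts).query(gt_pts)   # nearest predicted point per GT point
    d_pred_to_gt, _ = cKDTree(gt_pts).query(pred_pts)   # nearest GT point per predicted point
    completeness = float((d_gt_to_pred <= thresh_mm).mean())
    accuracy = float((d_pred_to_gt <= thresh_mm).mean())
    denom = completeness + accuracy
    f1 = 2 * completeness * accuracy / denom if denom else 0.0
    return completeness, accuracy, f1
```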

Advancing prenatal healthcare by explainable AI enhanced fetal ultrasound image segmentation using U-Net++ with attention mechanisms.

Singh R, Gupta S, Mohamed HG, Bharany S, Rehman AU, Ghadi YY, Hussen S

PubMed · Jun 4, 2025
Prenatal healthcare development requires accurate automated techniques for fetal ultrasound image segmentation. Such automation allows standardized evaluation of fetal development while minimizing time-consuming manual processes that are prone to human error. This research develops a segmentation framework based on U-Net++ with a ResNet backbone that incorporates attention components to enhance feature extraction in low-contrast, noisy ultrasound data. The model leverages the nested skip connections of U-Net++ and the residual learning of ResNet-34 to achieve state-of-the-art segmentation accuracy. Evaluation on a large fetal ultrasound image collection yielded superior results, reaching a 97.52% Dice coefficient, 95.15% Intersection over Union (IoU), and a 3.91 mm Hausdorff distance. The integrated Grad-CAM++ pipeline provides explanations of the model's decisions, supporting clinical utility and trust. This explainability component enables medical professionals to study how the model functions, yielding clear and verifiable segmentation outputs for better overall reliability. The framework bridges the gap between AI automation and clinical interpretability by highlighting the regions that drive predictions. The research shows that deep learning combined with Explainable AI (XAI) can produce medical imaging solutions that achieve high accuracy. The proposed system demonstrates readiness for clinical workflows by delivering a sophisticated prenatal diagnostic instrument that enhances healthcare outcomes.
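The abstract reports a Hausdorff distance alongside Dice and IoU. Below is a minimal sketch of the symmetric Hausdorff distance between two segmentation boundaries using SciPy's directed Hausdorff; the contour arrays are hypothetical and assumed to be expressed in millimetres, and this generic definition is not the authors' implementation.

```python
# A small sketch of the symmetric Hausdorff distance between two boundaries.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(contour_a: np.ndarray, contour_b: np.ndarray) -> float:
    """contour_a, contour_b: (N, 2) arrays of boundary coordinates in mm."""
    d_ab = directed_hausdorff(contour_a, contour_b)[0]
    d_ba = directed_hausdorff(contour_b, contour_a)[0]
    return max(d_ab, d_ba)
```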

AI-powered segmentation of bifid mandibular canals using CBCT.

Gumussoy I, Demirezer K, Duman SB, Haylaz E, Bayrakdar IS, Celik O, Syed AZ

PubMed · Jun 4, 2025
Accurate segmentation of the mandibular and bifid canals is crucial in dental implant planning to ensure safe implant placement, third molar extractions, and other surgical interventions. The objective of this study is to develop and validate an innovative artificial intelligence tool for the efficient and accurate segmentation of the mandibular and bifid canals on CBCT. CBCT data were screened to identify patients with clearly visible bifid canal variations, and their DICOM files were extracted. These DICOM files were then imported into the 3D Slicer® open-source software, where bifid canals and mandibular canals were annotated. The annotated data, along with the raw DICOM files, were processed using the nnU-Netv2 training model by the CranioCatch AI software team. A total of 69 anonymized CBCT volumes in DICOM format were converted to the NIfTI file format. The method, utilizing nnU-Net v2, accurately predicted the voxels associated with the mandibular canal, achieving an intersection of over 50% in nearly all samples. The accuracy, Dice score, precision, and recall for the mandibular canal/bifid canal were 0.99/0.99, 0.82/0.46, 0.85/0.70, and 0.80/0.42, respectively. Although the bifid canal segmentation did not meet the expected level of success, the findings indicate that the proposed method is promising and has the potential to be utilized as a supplementary tool for mandibular canal segmentation. Given the importance of accurately evaluating the mandibular canal before surgery, artificial intelligence could help reduce the burden on practitioners by automating the complicated and time-consuming process of tracing and segmenting this structure. Being able to distinguish bifid canals with artificial intelligence will help prevent neurovascular problems that may occur before or after surgery.
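The DICOM-to-NIfTI conversion step mentioned above is a routine preprocessing operation; one way to do it is with SimpleITK, as sketched below. The directory and file paths are placeholders, and no anonymization or nnU-Net-specific naming conventions are handled.

```python
# A minimal sketch of converting a DICOM series to a NIfTI volume with SimpleITK.
import SimpleITK as sitk

def dicom_series_to_nifti(dicom_dir: str, output_path: str) -> None:
    reader = sitk.ImageSeriesReader()
    series_files = reader.GetGDCMSeriesFileNames(dicom_dir)  # sorted slice files of the series
    reader.SetFileNames(series_files)
    image = reader.Execute()
    sitk.WriteImage(image, output_path)  # a .nii.gz extension selects the NIfTI writer

# Hypothetical usage for one anonymized CBCT case.
dicom_series_to_nifti("cbct_case_001/", "cbct_case_001.nii.gz")
```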

Digital removal of dermal denticle layer using geometric AI from 3D CT scans of shark craniofacial structures enhances anatomical precision.

Kim SW, Yuen AHL, Kim HW, Lee S, Lee SB, Lee YM, Jung WJ, Poon CTC, Park D, Kim S, Kim SG, Kang JW, Kwon J, Jo SJ, Giri SS, Park H, Seo JP, Kim DS, Kim BY, Park SC

PubMed · Jun 4, 2025
Craniofacial morphometrics in sharks provide crucial insights into evolutionary history, geographical variation, sexual dimorphism, and developmental patterns. However, the fragile cartilaginous nature of the shark craniofacial skeleton poses significant challenges for traditional specimen preparation, often resulting in damaged cranial landmarks and compromised measurement accuracy. While computed tomography (CT) offers a non-invasive alternative for anatomical observation, the high electron density of dermal denticles in sharks creates a unique challenge, obstructing clear visualization of internal structures in three-dimensional volume-rendered images (3DVRI). This study presents an artificial intelligence (AI)-based solution using machine-learning algorithms for digitally removing the dermal denticle layer from CT scans of the shark craniofacial skeleton. We developed geometric AI-driven software (SKINPEELER) that selectively removes high-intensity voxels corresponding to the dermal denticle layer while preserving underlying anatomical structures. We evaluated this approach using CT scans from 20 sharks (16 Carcharhinus brachyurus, 2 Alopias vulpinus, 1 Sphyrna lewini, and 1 Prionace glauca), applying our AI-driven software to process the Digital Imaging and Communications in Medicine (DICOM) images. The processed scans were reconstructed using bone reconstruction algorithms to enable precise craniofacial measurements. We assessed the accuracy of our method by comparing measurements from the processed 3DVRIs with traditional manual measurements. The AI-assisted approach demonstrated high accuracy (86.16-98.52%) relative to manual measurements. Additionally, we evaluated reproducibility and repeatability using intraclass correlation coefficients (ICC), finding high reproducibility (ICC: 0.456-0.998) and repeatability (ICC: 0.985-1.000 for operator 1 and 0.882-0.999 for operator 2). Our results indicate that this AI-enhanced digital denticle removal technique, combined with 3D CT reconstruction, provides a reliable and non-destructive alternative to traditional specimen preparation methods for investigating shark craniofacial morphology. This novel approach enhances measurement precision while preserving specimen integrity, potentially advancing various aspects of shark research, including evolutionary studies, conservation efforts, and anatomical investigations.
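To make the core idea concrete, the sketch below shows only a naive intensity-threshold baseline: suppressing high-density voxels so they no longer dominate a bone-window volume rendering. The HU cutoff and replacement value are assumptions, and the paper's geometric AI (SKINPEELER) is described as far more selective than this simple filter.

```python
# A naive threshold baseline for suppressing denticle-like high-density voxels in a CT volume.
import numpy as np

def suppress_high_density_voxels(volume_hu: np.ndarray, cutoff_hu: float = 2000.0,
                                 fill_hu: float = -1000.0) -> np.ndarray:
    """volume_hu: CT volume in Hounsfield units; returns a copy with voxels above cutoff replaced."""
    cleaned = volume_hu.copy()
    cleaned[cleaned >= cutoff_hu] = fill_hu  # replace very dense voxels with air-equivalent values
    return cleaned
```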

Effect of contrast enhancement on diagnosis of interstitial lung abnormality in automatic quantitative CT measurement.

Choi J, Ahn Y, Kim Y, Noh HN, Do KH, Seo JB, Lee SM

PubMed · Jun 3, 2025
To investigate the effect of contrast enhancement on the diagnosis of interstitial lung abnormalities (ILA) with automatic quantitative CT measurement in patients with paired pre- and post-contrast scans. Patients who underwent chest CT for thoracic surgery between April 2017 and December 2020 were retrospectively analyzed. ILA quantification was performed using deep learning-based automated software. Cases were categorized as ILA or non-ILA according to the Fleischner Society's definition, based on the quantification results or radiologist assessment (reference standard). Measurement variability, agreement, and diagnostic performance between the pre- and post-contrast scans were evaluated. In the 1134 included patients, post-contrast scans quantified a slightly larger volume of nonfibrotic ILA (mean difference: -0.2%), due to increased ground-glass opacity and reticulation volumes (-0.2% and -0.1%), whereas the fibrotic ILA volume remained unchanged (0.0%). ILA was diagnosed in 15 (1.3%), 22 (1.9%), and 40 (3.5%) patients by pre-contrast scans, post-contrast scans, and radiologists, respectively. The agreement between the pre- and post-contrast scans was substantial (κ = 0.75), but both pre-contrast (κ = 0.46) and post-contrast (κ = 0.54) scans demonstrated only moderate agreement with the radiologist. The sensitivity for ILA (32.5% vs. 42.5%, p = 0.221) and specificity for non-ILA (99.8% vs. 99.5%, p = 0.248) were comparable between pre- and post-contrast scans. Radiologist reclassification of equivocal ILA due to unilateral abnormalities increased the sensitivity for ILA (to 67.5% and 75.0%, respectively) in both pre- and post-contrast scans. Applying automated quantification to post-contrast scans appears to be acceptable in terms of agreement and diagnostic performance; however, radiologists may need to reclassify equivocal ILA to improve sensitivity. Question: The effect of contrast enhancement on the automated quantification of interstitial lung abnormality (ILA) remains unknown. Findings: Automated quantification measured slightly larger ground-glass opacity and reticulation volumes on post-contrast scans than on pre-contrast scans; however, contrast enhancement did not affect the sensitivity for interstitial lung abnormality. Clinical relevance: Applying automated quantification to post-contrast scans appears to be acceptable in terms of agreement and diagnostic performance.
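The agreement statistic quoted above (κ) is Cohen's kappa between per-patient ILA / non-ILA calls. A minimal sketch using scikit-learn follows; the label vectors are purely illustrative placeholders, not study data.

```python
# A small sketch of Cohen's kappa between pre- and post-contrast ILA classifications.
from sklearn.metrics import cohen_kappa_score

pre_contrast_calls = [1, 0, 0, 1, 0, 0, 1, 0]   # 1 = ILA, 0 = non-ILA (hypothetical)
post_contrast_calls = [1, 0, 0, 1, 0, 1, 1, 0]
kappa = cohen_kappa_score(pre_contrast_calls, post_contrast_calls)
print(f"kappa = {kappa:.2f}")
```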

Deep Learning Pipeline for Automated Assessment of Distances Between Tonsillar Tumors and the Internal Carotid Artery.

Jain A, Amanian A, Nagururu N, Creighton FX, Prisman E

PubMed · Jun 3, 2025
Evaluating the minimum distance (dTICA) between the internal carotid artery (ICA) and tonsillar tumors (TT) on imaging is essential for preoperative planning; we propose a tool to automatically extract dTICA. CT scans of 96 patients with TT were selected from The Cancer Imaging Archive. nnU-Net, a deep learning framework, was implemented to automatically segment both the TT and ICA from these scans. The Dice similarity coefficient (DSC) and average Hausdorff distance (AHD) were used to evaluate the performance of the nnU-Net. Thereafter, an automated tool was built to calculate the magnitude of dTICA from these segmentations. The average DSC and AHD were 0.67 and 2.44 mm for the TT and 0.83 and 0.49 mm for the ICA, respectively. The mean dTICA was 6.66 mm and varied significantly by tumor T stage (p = 0.00456). The proposed pipeline can accurately and automatically capture dTICA, potentially assisting clinicians in preoperative evaluation.
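One common way to extract a minimum distance between two segmented structures on a shared CT grid is a Euclidean distance transform with anisotropic voxel spacing, as sketched below. The mask names and spacing values are placeholders; the paper's own tool may compute dTICA differently.

```python
# A minimal sketch of a minimum tumor-to-ICA distance from two binary masks.
import numpy as np
from scipy.ndimage import distance_transform_edt

def min_distance_mm(tumor_mask: np.ndarray, ica_mask: np.ndarray,
                    spacing_mm=(1.0, 0.5, 0.5)) -> float:
    """Distance (mm) to the nearest ICA voxel, minimized over all tumor voxels."""
    # EDT of the ICA complement gives, at every voxel, the distance to the nearest ICA voxel.
    dist_to_ica = distance_transform_edt(~ica_mask.astype(bool), sampling=spacing_mm)
    return float(dist_to_ica[tumor_mask.astype(bool)].min())
```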

PARADIM: A Platform to Support Research at the Interface of Data Science and Medical Imaging.

Lemaréchal Y, Couture G, Pelletier F, Lefol R, Asselin PL, Ouellet S, Bernard J, Ebrahimpour L, Manem VSK, Topalis J, Schachtner B, Jodogne S, Joubert P, Jeblick K, Ingrisch M, Després P

PubMed · Jun 3, 2025
This paper describes PARADIM, a digital infrastructure designed to support research at the interface of data science and medical imaging, with a focus on Research Data Management best practices. The platform is built from open-source components and rooted in the FAIR principles through strict compliance with the DICOM standard. It addresses key needs in data curation, governance, privacy, and scalable resource management. Supporting every stage of the data science discovery cycle, the platform offers robust functionality for user identity and access management, data de-identification, storage, annotation, and model training and evaluation. Rich metadata are generated throughout the research lifecycle to ensure the traceability and reproducibility of results. PARADIM hosts several medical image collections and allows the automation of large-scale, computationally intensive pipelines (e.g., automatic segmentation, dose calculations, AI model evaluation). The platform fills a gap at the interface of data science and medical imaging, where digital infrastructures are key to the development, evaluation, and deployment of innovative solutions in the real world.
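As an illustration of one building block mentioned above, DICOM de-identification can be prototyped with pydicom as sketched below. The tag list is illustrative and far from a complete confidentiality profile (a production platform would follow DICOM PS3.15), and this is not PARADIM's actual implementation.

```python
# A minimal sketch of basic DICOM de-identification with pydicom.
import pydicom

def basic_deidentify(path_in: str, path_out: str) -> None:
    ds = pydicom.dcmread(path_in)
    for keyword in ("PatientName", "PatientID", "PatientBirthDate", "InstitutionName"):
        if hasattr(ds, keyword):
            setattr(ds, keyword, "")  # blank out directly identifying attributes
    ds.remove_private_tags()          # drop vendor-specific private elements
    ds.save_as(path_out)
```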

Artificial intelligence vs human expertise: A comparison of plantar fascia thickness measurements through MRI imaging.

Alyanak B, Çakar İ, Dede BT, Yıldızgören MT, Bağcıer F

PubMed · Jun 3, 2025
This study aims to evaluate the reliability of plantar fascia thickness measurements performed by ChatGPT-4 using magnetic resonance imaging (MRI) compared to those obtained by an experienced clinician. In this retrospective, single-center study, foot MRI images from the hospital archive were analysed. Plantar fascia thickness was measured under both blinded and non-blinded conditions by an experienced clinician and ChatGPT-4 at two separate time points. Measurement reliability was assessed using the intraclass correlation coefficient (ICC), mean absolute error (MAE), and mean relative error (MRE). A total of 41 participants (32 females, 9 males) were included. The average plantar fascia thickness measured by the clinician was 4.20 ± 0.80 mm and 4.25 ± 0.92 mm under blinded and non-blinded conditions, respectively, while ChatGPT-4's measurements were 6.47 ± 1.30 mm and 6.46 ± 1.31 mm, respectively. Human evaluators demonstrated excellent agreement (ICC = 0.983-0.989), whereas ChatGPT-4 exhibited low reliability (ICC = 0.391-0.432). In thin plantar fascia cases, ChatGPT-4's error rate was higher, with MAE = 2.70 mm, MRE = 77.17% under blinded conditions, and MAE = 2.91 mm, MRE = 87.02% under non-blinded conditions. ChatGPT-4 demonstrated lower reliability in plantar fascia thickness measurements compared to an experienced clinician, with increased error rates in thin structures. These findings highlight the limitations of AI-based models in medical image analysis and emphasize the need for further refinement before clinical implementation.
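The error metrics reported above can be computed as sketched below; the measurement arrays are hypothetical, and taking MRE relative to the clinician's value is an assumption about the paper's exact definition.

```python
# A short sketch of MAE and MRE between AI and clinician thickness measurements.
import numpy as np

def mae_mre(ai_mm: np.ndarray, clinician_mm: np.ndarray):
    abs_err = np.abs(ai_mm - clinician_mm)
    mae = float(abs_err.mean())                          # mean absolute error, mm
    mre = float((abs_err / clinician_mm).mean() * 100)   # mean relative error, %
    return mae, mre

# Hypothetical example values (mm).
print(mae_mre(np.array([6.5, 6.4, 6.6]), np.array([4.2, 4.1, 4.3])))
```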
