Page 47 of 1341332 results

Coronary CT angiography evaluation with artificial intelligence for individualized medical treatment of atherosclerosis: a Consensus Statement from the QCI Study Group.

Schulze K, Stantien AM, Williams MC, Vassiliou VS, Giannopoulos AA, Nieman K, Maurovich-Horvat P, Tarkin JM, Vliegenthart R, Weir-McCall J, Mohamed M, Föllmer B, Biavati F, Stahl AC, Knape J, Balogh H, Galea N, Išgum I, Arbab-Zadeh A, Alkadhi H, Manka R, Wood DA, Nicol ED, Nurmohamed NS, Martens FMAC, Dey D, Newby DE, Dewey M

pubmed logopapers · Aug 1 2025
Coronary CT angiography is widely implemented, with an estimated 2.2 million procedures in patients with stable chest pain every year in Europe alone. In parallel, artificial intelligence and machine learning are poised to transform coronary atherosclerotic plaque evaluation by improving reliability and speed. However, little is known about how to use coronary atherosclerosis imaging biomarkers to individualize recommendations for medical treatment. This Consensus Statement from the Quantitative Cardiovascular Imaging (QCI) Study Group outlines key recommendations derived from a three-step Delphi process that took place after the third international QCI Study Group meeting in September 2024. Experts from various fields of cardiovascular imaging agreed on the use of age-adjusted and gender-adjusted percentile curves, based on coronary plaque data from the DISCHARGE and SCOT-HEART trials. Two key issues were addressed: the need to harness the reliability and precision of artificial intelligence and machine learning tools and to tailor treatment on the basis of individualized plaque analysis. The QCI Study Group recommends that the presence of any atherosclerotic plaque should lead to a recommendation of pharmacological treatment, whereas total plaque volume at or above the 70th percentile warrants high-intensity treatment. The aim of these recommendations is to lay the groundwork for future trials and to unlock the potential of coronary CT angiography to improve patient outcomes globally.
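The two consensus thresholds above reduce to a simple decision rule. The sketch below is purely illustrative: the function name, units, and percentile argument are our assumptions, not an implementation from the QCI Study Group.

```python
def qci_treatment_recommendation(total_plaque_volume_mm3: float,
                                 volume_percentile: float) -> str:
    """Illustrative mapping of the QCI consensus thresholds.

    Per the abstract: any atherosclerotic plaque -> pharmacological
    treatment; total plaque volume at or above the 70th (age- and
    gender-adjusted) percentile -> high-intensity treatment.
    All names here are hypothetical.
    """
    if total_plaque_volume_mm3 <= 0:
        return "no plaque: no plaque-directed pharmacological treatment"
    if volume_percentile >= 70:
        return "high-intensity treatment"
    return "pharmacological treatment"
```

In practice the percentile would come from the age- and gender-adjusted curves derived from the DISCHARGE and SCOT-HEART data, not from a raw volume.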

Lumbar and pelvic CT image segmentation based on cross-scale feature fusion and linear self-attention mechanism.

Li C, Chen L, Liu Q, Teng J

pubmed logopapers · Aug 1 2025
The lumbar spine and pelvis are critical stress-bearing structures of the human body, and their rapid and accurate segmentation plays a vital role in clinical diagnosis and intervention. However, conventional CT imaging poses significant challenges due to the low contrast of sacral and bilateral hip tissues and the complex and highly similar intervertebral space structures within the lumbar spine. To address these challenges, we propose a general-purpose segmentation network that integrates a cross-scale feature fusion strategy with a linear self-attention mechanism. The proposed network effectively extracts multi-scale features and fuses them along the channel dimension, enabling both structural and boundary information of lumbar and pelvic regions to be captured within the encoder-decoder architecture. Furthermore, we introduce a linear mapping strategy to approximate the traditional attention matrix with a low-rank representation, allowing the linear attention mechanism to significantly reduce computational complexity while maintaining segmentation accuracy for vertebrae and pelvic bones. Comparative and ablation experiments conducted on the CTSpine1K and CTPelvic1K datasets demonstrate that our method achieves improvements of 1.5% in Dice Similarity Coefficient (DSC) and 2.6% in Hausdorff Distance (HD) over state-of-the-art models, validating the effectiveness of our approach in enhancing boundary segmentation quality and segmentation accuracy in homogeneous anatomical regions.
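As a rough sketch of how a linear self-attention mechanism avoids materializing the quadratic n x n attention matrix, the snippet below uses a positive kernel feature map (elu(x)+1, as in generic linearized attention). This is an assumption for illustration; the paper's specific low-rank mapping may differ.

```python
import numpy as np

def linear_self_attention(Q, K, V, eps=1e-6):
    """Linearized attention in O(n * d * d_v): phi(Q) @ (phi(K)^T V).

    phi(x) = elu(x) + 1 keeps features positive so the implicit
    attention weights are positive and normalizable. Generic sketch,
    not the exact mapping used in the paper.
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V            # (d, d_v): computed once, no n x n matrix
    Z = Qp @ Kp.sum(axis=0)  # per-query normalizer
    return (Qp @ KV) / (Z[:, None] + eps)
```

Reordering the matrix products is what turns the usual O(n^2) cost into one linear in the sequence length (here, the number of spatial positions in the feature map).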

Deep learning model for automated segmentation of sphenoid sinus and middle skull base structures in CBCT volumes using nnU-Net v2.

Gülşen İT, Kuran A, Evli C, Baydar O, Dinç Başar K, Bilgir E, Çelik Ö, Bayrakdar İŞ, Orhan K, Acu B

pubmed logopapers · Aug 1 2025
The purpose of this study was to develop a deep learning model based on nnU-Net v2 for the automated segmentation of the sphenoid sinus and middle skull base anatomical structures in cone-beam computed tomography (CBCT) volumes, and to evaluate the model's performance. In this retrospective study, the sphenoid sinus and surrounding anatomical structures in 99 CBCT scans were annotated using web-based labeling software. Model training was conducted using the nnU-Net v2 deep learning model with a learning rate of 0.01 for 1000 epochs. The performance of the model in automatically segmenting these anatomical structures in CBCT scans was evaluated using a series of metrics, including accuracy, precision, recall, Dice coefficient (DC), 95% Hausdorff distance (95% HD), intersection over union (IoU), and AUC. The model achieved high segmentation performance for the sphenoid sinus, foramen rotundum, and Vidian canal within the middle skull base, with the highest DC observed for the sphenoid sinus (DC: 0.96). However, the model demonstrated limited performance in segmenting the other foramina of the middle skull base, indicating the need for further optimization for these structures.
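For reference, the two overlap metrics reported above can be computed from binary masks as follows. This is a minimal sketch of the standard definitions; the study's evaluation pipeline may differ in details such as handling of empty masks.

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Dice coefficient and intersection-over-union for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())  # 2|A∩B| / (|A| + |B|)
    iou = inter / union                           # |A∩B| / |A∪B|
    return dice, iou
```

Note that Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why papers often report both only for comparability with prior work.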

Enhanced Detection of Age-Related and Cognitive Declines Using Automated Hippocampal-To-Ventricle Ratio in Alzheimer's Patients.

Fernandez-Lozano S, Fonov V, Schoemaker D, Pruessner J, Potvin O, Duchesne S, Collins DL

pubmed logopapers · Aug 1 2025
The hippocampal-to-ventricle ratio (HVR) is a biomarker of medial temporal atrophy, particularly useful in the assessment of neurodegeneration in diseases such as Alzheimer's disease (AD). To minimize subjectivity and inter-rater variability, an automated, accurate, precise, and reliable segmentation technique for the hippocampus (HC) and surrounding cerebro-spinal fluid (CSF) filled spaces-such as the temporal horns of the lateral ventricles-is essential. We trained and evaluated three automated methods for the segmentation of both HC and CSF (Multi-Atlas Label Fusion (MALF), Nonlinear Patch-Based Segmentation (NLPB), and a Convolutional Neural Network (CNN)). We then evaluated these methods, including the widely used FreeSurfer technique, using baseline T1w MRIs of 1641 participants from the AD Neuroimaging Initiative study with varying degrees of atrophy associated with their cognitive status, spanning the spectrum from cognitively healthy to clinically probable AD. Our gold standard consisted of manual segmentations of HC and CSF from 80 cognitively healthy individuals. We calculated HC volumes and HVR and compared all methods in terms of segmentation reliability, similarity across methods, sensitivity in detecting between-group differences, and associations with age, learning scores from the Rey Auditory Verbal Learning Test (RAVLT), and Alzheimer's Disease Assessment Scale 13 (ADAS13) scores. Cross validation demonstrated that the CNN method yielded more accurate HC and CSF segmentations when compared to MALF and NLPB, demonstrating higher volumetric overlap (Dice Kappa = 0.94) and correlation (rho = 0.99) with the manual labels. It was also the most reliable method in clinical data application, showing minimal failures. Our comparisons yielded high correlations between FreeSurfer, CNN and NLPB volumetric values.
HVR yielded higher control:AD effect sizes than HC volumes among all segmentation methods, reinforcing the significance of HVR in clinical distinction. The positive association with age was significantly stronger for HVR compared to HC volumes for all methods except FreeSurfer. Memory associations with HC volumes or HVR were only significant for individuals with mild cognitive impairment. Finally, the HC volumes and HVR showed comparable negative associations with ADAS13, particularly in the mild cognitive impairment cohort. This study provides an evaluation of automated segmentation methods centered on estimating HVR, emphasizing the superior performance of a CNN-based algorithm. The findings underscore the pivotal role of accurate segmentation in HVR calculations for precise clinical applications, contributing valuable insights into medial temporal lobe atrophy in neurodegenerative disorders, especially AD.
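A minimal sketch of how an HVR could be computed from a segmentation label map is shown below. The label values and the exact formulation (here hippocampal volume divided by temporal-horn CSF volume) are assumptions for illustration; the paper's normalization may differ.

```python
import numpy as np

# Hypothetical label values; real segmentation protocols differ.
HC_LABEL, CSF_LABEL = 1, 2

def hippocampal_to_ventricle_ratio(labels, voxel_volume_mm3):
    """Illustrative HVR: hippocampus volume over temporal-horn CSF volume.

    `labels` is an integer label volume; `voxel_volume_mm3` converts
    voxel counts to mm^3 (it cancels in the ratio, but volumes are
    usually reported alongside the HVR).
    """
    v_hc = np.count_nonzero(labels == HC_LABEL) * voxel_volume_mm3
    v_csf = np.count_nonzero(labels == CSF_LABEL) * voxel_volume_mm3
    if v_csf == 0:
        raise ValueError("no CSF voxels labeled; cannot form a ratio")
    return v_hc / v_csf
```

Because the CSF volume grows as the hippocampus atrophies, a ratio of this kind amplifies group differences relative to HC volume alone, which is consistent with the larger effect sizes reported above.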

Reference charts for first-trimester placental volume derived using OxNNet.

Mathewlynn S, Starck LN, Yin Y, Soltaninejad M, Swinburne M, Nicolaides KH, Syngelaki A, Contreras AG, Bigiotti S, Woess EM, Gerry S, Collins S

pubmed logopapers · Aug 1 2025
To establish a comprehensive reference range for OxNNet-derived first-trimester placental volume (FTPV), based on values observed in healthy pregnancies. Data were obtained from the First Trimester Placental Ultrasound Study, an observational cohort study in which three-dimensional placental ultrasound imaging was performed between 11 + 2 and 14 + 1 weeks' gestation, alongside otherwise routine care. A subgroup of singleton pregnancies resulting in term live birth, without neonatal unit admission or major chromosomal or structural abnormality, was included. Exclusion criteria were fetal growth restriction, maternal diabetes mellitus, hypertensive disorders of pregnancy or other maternal medical conditions (e.g. chronic hypertension, antiphospholipid syndrome, systemic lupus erythematosus). Placental images were processed using the OxNNet toolkit, a software solution based on a fully convolutional neural network, for automated placental segmentation and volume calculation. Quantile regression and the lambda-mu-sigma (LMS) method were applied to model the distribution of FTPV, using both crown-rump length (CRL) and gestational age as predictors. Model fit was assessed using the Akaike information criterion (AIC), and centile curves were constructed for visual inspection. The cohort comprised 2547 cases. The distribution of FTPV across gestational ages was positively skewed, with variation in the distribution at different gestational timepoints. In model comparisons, the LMS method yielded lower AIC values compared with quantile regression models. For predicting FTPV from CRL, the LMS model with the Sinh-Arcsinh distribution achieved the best performance, with the lowest AIC value. For gestational-age-based prediction, the LMS model with the Box-Cox Cole and Green original distribution achieved the lowest AIC value. The LMS models were selected to construct centile charts for FTPV based on both CRL and gestational age.
Evaluation of the centile charts revealed strong agreement between predicted and observed centiles, with minimal deviations. Both models demonstrated excellent calibration, and the Z-scores derived using each of the models confirmed normal distribution. This study established reference ranges for FTPV based on both CRL and gestational age in healthy pregnancies. The LMS method provided the best model fit, demonstrating excellent calibration and minimal deviations between predicted and observed centiles. These findings should facilitate the exploration of FTPV as a potential biomarker for adverse pregnancy outcome and provide a foundation for future research into its clinical applications. © 2025 The Author(s). Ultrasound in Obstetrics & Gynecology published by John Wiley & Sons Ltd on behalf of International Society of Ultrasound in Obstetrics and Gynecology.
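The LMS (lambda-mu-sigma) machinery behind such centile charts is compact: given the fitted L (skewness), M (median) and S (coefficient of variation) at a given CRL or gestational age, a measurement converts to a Z-score and back via the standard Cole-Green Box-Cox relations. The formulas below are the textbook ones; the variable names and example parameter values are ours, not from the study.

```python
import math

def lms_z(y, L, M, S):
    """Z-score of measurement y under Box-Cox LMS parameters."""
    if abs(L) < 1e-12:                       # L -> 0 limit is lognormal
        return math.log(y / M) / S
    return ((y / M) ** L - 1.0) / (L * S)

def lms_centile_value(z, L, M, S):
    """Inverse transform: the measurement at Z-score z (z=0 is the median)."""
    if abs(L) < 1e-12:
        return M * math.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)
```

A centile chart is then just `lms_centile_value` evaluated at the standard-normal quantiles (e.g. z = -1.645, 0, 1.645 for the 5th, 50th, and 95th centiles) across the smoothed L, M, S curves.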

Utility of artificial intelligence in radiosurgery for pituitary adenoma: a deep learning-based automated segmentation model and evaluation of its clinical applicability.

Černý M, May J, Hamáčková L, Hallak H, Novotný J, Baručić D, Kybic J, May M, Májovský M, Link MJ, Balasubramaniam N, Síla D, Babničová M, Netuka D, Liščák R

pubmed logopapers · Aug 1 2025
The objective of this study was to develop a deep learning model for automated pituitary adenoma segmentation in MRI scans for stereotactic radiosurgery planning and to assess its accuracy and efficiency in clinical settings. An nnU-Net-based model was trained on MRI scans with expert segmentations of 582 patients treated with Leksell Gamma Knife over the course of 12 years. The accuracy of the model was evaluated by a human expert on a separate dataset of 146 previously unseen patients. The primary outcome was the comparison of expert ratings between the predicted segmentations and a control group consisting of original manual segmentations. Secondary outcomes were the influence of tumor volume, previous surgery, previous stereotactic radiosurgery (SRS), and endocrinological status on expert ratings; performance in a subgroup of nonfunctioning macroadenomas (1000-4000 mm³) without previous surgery and/or radiosurgery; the influence of using additional MRI modalities as model input; and time cost reduction. The model achieved Dice similarity coefficients of 82.3%, 63.9%, and 79.6% for tumor, normal gland, and optic nerve, respectively. A human expert rated 20.6% of the segmentations as applicable in treatment planning without any modifications, 52.7% as applicable with minor manual modifications, and 26.7% as inapplicable. The ratings for predicted segmentations were lower than for the control group of original segmentations (p < 0.001). Larger tumor volume, previous radiosurgery, and nonfunctioning pituitary adenoma were associated with better expert ratings (p = 0.005, p = 0.007, and p < 0.001, respectively). In the subgroup without previous surgery, although expert ratings were more favorable, the association did not reach statistical significance (p = 0.074). In the subgroup of noncomplex cases (n = 9), 55.6% of the segmentations were rated as applicable without any manual modifications and no segmentations were rated as inapplicable.
Manually improving inaccurate segmentations instead of creating them from scratch led to 53.6% reduction of the time cost (p < 0.001). The results were applicable for treatment planning with either no or minor manual modifications, demonstrating a significant increase in the efficiency of the planning process. The predicted segmentations can be loaded into the planning software used in clinical practice for treatment planning. The authors discuss some considerations of the clinical utility of the automated segmentation models, as well as their integration within established clinical workflows, and outline directions for future research.

From Consensus to Standardization: Evaluating Deep Learning for Nerve Block Segmentation in Ultrasound Imaging.

Pelletier ED, Jeffries SD, Suissa N, Sarty I, Malka N, Song K, Sinha A, Hemmerling TM

pubmed logopapers · Aug 1 2025
Deep learning can automate nerve identification by learning from expert-labeled examples to detect and highlight nerves in ultrasound images. This study aims to evaluate the performance of deep-learning models in identifying nerves for ultrasound-guided nerve blocks. A total of 3594 raw ultrasound images were collected from public sources-an open GitHub repository and publicly available YouTube videos-covering 9 nerve block regions: Transversus Abdominis Plane (TAP), Femoral Nerve, Posterior Rectus Sheath, Median and Ulnar Nerves, Pectoralis Plane, Sciatic Nerve, Infraclavicular Brachial Plexus, Supraclavicular Brachial Plexus, and Interscalene Brachial Plexus. Of these, 10 images per nerve region were kept for testing, with each image labeled by 10 expert anesthesiologists. The remaining 3504 were labeled by a medical anesthesia resident and augmented to create a diverse training dataset of 25,000 images per nerve region. Additionally, 908 negative ultrasound images, which do not contain the targeted nerve structures, were included to improve model robustness. Ten convolutional neural network-based deep-learning architectures were selected to identify nerve structures. Models were trained using a 5-fold cross-validation approach on an EVGA GeForce RTX 3090 GPU, with batch size, number of epochs, and the Adam optimizer adjusted to enhance the models' effectiveness. After training, models were evaluated on a set of 10 images per nerve region, using the Dice score (range: 0 to 1, where 1 indicates perfect agreement and 0 indicates no overlap) to compare model predictions with expert-labeled images. Further validation was conducted by 10 medical experts who assessed whether they would insert a needle into the model's predictions. Statistical analyses were performed to explore the relationship between Dice scores and expert responses.
The R2U-Net model achieved the highest average Dice score (0.7619) across all nerve regions, outperforming other models (0.7123-0.7619). However, statistically significant differences in model performance were observed only for the TAP nerve region (χ² = 26.4, df = 9, P = .002, ε² = 0.267). Expert evaluations indicated high accuracy in the model predictions, particularly for the Popliteal nerve region, where experts agreed to insert a needle based on all 100 model-generated predictions. Logistic modeling suggested that higher Dice overlap might increase the odds of expert acceptance in the Supraclavicular region (odds ratio [OR] = 8.59 × 10⁴, 95% confidence interval [CI], 0.33-2.25 × 10¹⁰; P = .073). The findings demonstrate the potential of deep-learning models, such as R2U-Net, to deliver consistent segmentation results in ultrasound-guided nerve block procedures.

IHE-Net: Hidden feature discrepancy fusion and triple consistency training for semi-supervised medical image segmentation.

Ju M, Wang B, Zhao Z, Zhang S, Yang S, Wei Z

pubmed logopapers · Jul 31 2025
Teacher-Student (TS) networks have become mainstream frameworks for semi-supervised deep learning and are widely used in medical image segmentation. However, traditional TS frameworks based on single or homogeneous encoders often struggle to capture the rich semantic details required for complex, fine-grained tasks. To address this, we propose a novel semi-supervised medical image segmentation framework (IHE-Net), which exploits the feature discrepancies of two heterogeneous encoders to improve segmentation performance. The two encoders are instantiated with networks from different learning paradigms, a CNN and a Transformer/Mamba, to extract richer and more robust context representations from unlabeled data. On this basis, we propose a simple yet powerful multi-level feature discrepancy fusion module (MFDF), which effectively integrates different modal features and their discrepancies from the two heterogeneous encoders. This design enhances the representational capacity of the model through efficient fusion without introducing additional computational overhead. Furthermore, we introduce a triple consistency learning strategy to improve predictive stability by setting dual decoders and adding mixed output consistency. Extensive experimental results on three skin lesion segmentation datasets, ISIC2017, ISIC2018, and PH2, demonstrate the superiority of our framework. Ablation studies further validate the rationale and effectiveness of the proposed method. Code is available at: https://github.com/joey-AI-medical-learning/IHE-Net.
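One simplified reading of the triple consistency idea is three pairwise agreement terms between the two decoder outputs and the mixed output on unlabeled data. The sketch below uses MSE as the consistency measure; this is our assumption for illustration, not IHE-Net's exact loss (see the linked repository for the authors' implementation).

```python
import numpy as np

def triple_consistency_loss(p_dec1, p_dec2, p_mixed):
    """Sum of pairwise MSE terms between two decoder predictions and a
    mixed prediction (all soft probability maps of the same shape).
    A simplified sketch of triple consistency; the paper's loss may differ.
    """
    mse = lambda a, b: float(np.mean((a - b) ** 2))
    return (mse(p_dec1, p_dec2)
            + mse(p_dec1, p_mixed)
            + mse(p_dec2, p_mixed))
```

Minimizing such terms pushes the heterogeneous branches toward agreement on unlabeled images, which is what provides the training signal where no ground truth exists.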

Topology Optimization in Medical Image Segmentation with Fast Euler Characteristic

Liu Li, Qiang Ma, Cheng Ouyang, Johannes C. Paetzold, Daniel Rueckert, Bernhard Kainz

arxiv logopreprint · Jul 31 2025
Deep learning-based medical image segmentation techniques have shown promising results when evaluated based on conventional metrics such as the Dice score or Intersection-over-Union. However, these fully automatic methods often fail to meet clinically acceptable accuracy, especially when topological constraints should be observed, e.g., continuous boundaries or closed surfaces. In medical image segmentation, the correctness of a segmentation in terms of the required topological genus sometimes is even more important than the pixel-wise accuracy. Existing topology-aware approaches commonly estimate and constrain the topological structure via the concept of persistent homology (PH). However, these methods are difficult to implement for high-dimensional data due to their polynomial computational complexity. To overcome this problem, we propose a novel and fast approach for topology-aware segmentation based on the Euler Characteristic ($\chi$). First, we propose a fast formulation for $\chi$ computation in both 2D and 3D. The scalar $\chi$ error between the prediction and ground-truth serves as the topological evaluation metric. Then we estimate the spatial topology correctness of any segmentation network via a so-called topological violation map, i.e., a detailed map that highlights regions with $\chi$ errors. Finally, the segmentation results from an arbitrary network are refined based on the topological violation maps by a topology-aware correction network. Our experiments are conducted on both 2D and 3D datasets and show that our method can significantly improve topological correctness while preserving pixel-wise segmentation accuracy.
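In 2D, a fast $\chi$ computation needs only 2x2 "bit-quad" counts (Gray's classic method), which is one way to realize a constant-cost-per-pixel formulation of the kind the paper advocates. The sketch below is ours, not the authors' code.

```python
import numpy as np

def euler_characteristic_2d(mask, connectivity=4):
    """Euler number of a binary image from Gray's bit-quad counts.

    chi = (#connected components - #holes). A single pass over 2x2
    windows, so the cost is linear in the number of pixels; no
    persistent homology machinery is needed.
    """
    m = np.pad(np.asarray(mask, dtype=np.uint8), 1)
    a, b = m[:-1, :-1], m[:-1, 1:]
    c, d = m[1:, :-1], m[1:, 1:]
    s = a + b + c + d
    q1 = int(np.sum(s == 1))               # quads with one foreground pixel
    q3 = int(np.sum(s == 3))               # quads with three
    qd = int(np.sum((s == 2) & (a == d)))  # diagonal foreground pairs
    sign = 2 if connectivity == 4 else -2  # Gray's 4- vs 8-connectivity form
    return (q1 - q3 + sign * qd) // 4
```

The scalar topological error |χ(prediction) − χ(ground truth)| then serves directly as the evaluation metric described above, and computing it per patch yields a violation map.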

