Page 77 of 1341332 results

Embryonic cranial cartilage defects in the Fgfr3<sup>Y367C/+</sup> mouse model of achondroplasia.

Motch Perrine SM, Sapkota N, Kawasaki K, Zhang Y, Chen DZ, Kawasaki M, Durham EL, Heuzé Y, Legeai-Mallet L, Richtsmeier JT

PubMed | Jul 1 2025
Achondroplasia, the most common chondrodysplasia in humans, is caused by one of two gain-of-function mutations localized in the transmembrane domain of fibroblast growth factor receptor 3 (FGFR3), leading to constitutive activation of FGFR3 and subsequent growth plate cartilage and bone defects. Phenotypic features of achondroplasia include macrocephaly with frontal bossing, midface hypoplasia, disproportionate shortening of the extremities, brachydactyly with a trident configuration of the hand, and bowed legs. The condition is defined primarily by its postnatal effects on bone and cartilage, and embryonic development of these tissues in affected individuals is not well studied. Using the Fgfr3<sup>Y367C/+</sup> mouse model of achondroplasia, we investigated the developing chondrocranium and Meckel's cartilage (MC) at embryonic days (E)14.5 and E16.5. Sparse hand annotations of the chondrocranial and MC cartilages visualized in phosphotungstic acid-enhanced three-dimensional (3D) micro-computed tomography (microCT) images were used to train our automatic deep learning-based 3D segmentation model and produce 3D isosurfaces of the chondrocranium and MC. Using 3D coordinates of landmarks measured on the 3D isosurfaces, we quantified differences in the chondrocranium and MC of Fgfr3<sup>Y367C/+</sup> mice relative to their unaffected littermates. Statistically significant differences in the morphology and growth of the chondrocranium and MC were found, indicating direct effects of this Fgfr3 mutation on embryonic cranial and pharyngeal cartilages, which in turn can secondarily affect cranial dermal bone development. Our results support the suggestion that early therapeutic intervention during cartilage formation may lessen the effects of this condition.
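The landmark-based comparison described above boils down to measuring distances between corresponding 3D points on the isosurfaces. As a minimal illustrative sketch (not the authors' pipeline; the coordinates are made-up), inter-landmark distances can be computed and compared between genotypes like so:

```python
import numpy as np

# Hypothetical 3D landmark coordinates (mm) from two isosurfaces;
# names and values are illustrative, not the study's data.
mutant = np.array([[0.0, 0.0, 0.0], [4.1, 0.2, 0.1], [2.0, 3.5, 0.3]])
control = np.array([[0.0, 0.0, 0.0], [4.6, 0.1, 0.0], [2.1, 3.9, 0.2]])

def pairwise_landmark_distances(pts):
    """Euclidean distance between every pair of landmarks."""
    diff = pts[:, None, :] - pts[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# Compare corresponding inter-landmark distances between the two groups.
d_mut = pairwise_landmark_distances(mutant)
d_ctl = pairwise_landmark_distances(control)
rel_diff = (d_mut - d_ctl) / np.where(d_ctl == 0, 1, d_ctl)
```

Group-level testing of such inter-landmark distances is the basis of the morphometric comparison reported in the abstract.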

Deep learning algorithm enables automated Cobb angle measurements with high accuracy.

Hayashi D, Regnard NE, Ventre J, Marty V, Clovis L, Lim L, Nitche N, Zhang Z, Tournier A, Ducarouge A, Kompel AJ, Tannoury C, Guermazi A

PubMed | Jul 1 2025
To determine the accuracy of automatic Cobb angle measurements by deep learning (DL) on full spine radiographs. Full spine radiographs of patients aged > 2 years were screened using the radiology reports to identify radiographs suitable for Cobb angle measurement. Two senior musculoskeletal radiologists and one senior orthopedic surgeon independently annotated Cobb angles exceeding 7°, indicating the angle location as proximal thoracic (apices between T3 and T5), main thoracic (apices between T6 and T11), or thoraco-lumbar (apices between T12 and L4). If at least two readers agreed on the number and location of the angles, and the difference between comparable angles was < 8°, the ground truth was defined as the mean of their measurements; otherwise, the radiographs were reviewed by the three annotators in consensus. The DL software (BoneMetrics, Gleamer) was evaluated against the manual annotation in terms of mean absolute error (MAE). A total of 345 patients were included in the study (age 33 ± 24 years, 221 women): 179 pediatric patients (< 22 years old) and 166 adult patients (22 to 85 years old). Fifty-three cases were reviewed in consensus. The MAE of the DL algorithm for the main curvature was 2.6° (95% CI [2.0; 3.3]). For the subgroup of pediatric patients, the MAE was 1.9° (95% CI [1.6; 2.2]), versus 3.3° (95% CI [2.2; 4.8]) for adults. The DL algorithm predicted the Cobb angle of scoliotic patients with high accuracy.
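A Cobb angle is the angle between the endplate lines of the two end vertebrae of a curve. A minimal geometric sketch (not the BoneMetrics implementation; the endplate direction vectors are illustrative) is:

```python
import numpy as np

def cobb_angle(upper_endplate, lower_endplate):
    """Angle (degrees) between two endplate lines, each given as a
    direction vector (dx, dy) in image coordinates."""
    u = np.asarray(upper_endplate, float)
    v = np.asarray(lower_endplate, float)
    cos = abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, 0.0, 1.0)))

# Two hypothetical endplate lines tilted toward each other.
angle = cobb_angle((1.0, 0.2), (1.0, -0.3))   # ~28° curve
```

In a DL pipeline the endplate vectors would come from predicted vertebral keypoints; the angle computation itself is this simple.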

Predicting progression-free survival in sarcoma using MRI-based automatic segmentation models and radiomics nomograms: a preliminary multicenter study.

Zhu N, Niu F, Fan S, Meng X, Hu Y, Han J, Wang Z

PubMed | Jul 1 2025
Some sarcomas are highly malignant and associated with high recurrence rates despite treatment. This multicenter study aimed to develop and validate a radiomics signature to estimate sarcoma progression-free survival (PFS). The study retrospectively enrolled 202 consecutive patients with pathologically diagnosed sarcoma who had pre-treatment axial fat-suppressed T2-weighted images (FS-T2WI) and included them in the ROI-Net model for training. Among them, 120 patients were included in the radiomics analysis, all of whom had pre-treatment axial T1-weighted images and transverse FS-T2WI, and were randomly divided into a development group (n = 96) and a validation group (n = 24). In the development cohort, Least Absolute Shrinkage and Selection Operator (LASSO) Cox regression was used to select the radiomics features for PFS prediction. By combining significant clinical features with the radiomics features, a nomogram was constructed using Cox regression. The proposed ROI-Net framework achieved a Dice coefficient of 0.820 (0.791-0.848). The radiomics signature, based on 21 features, could distinguish high-risk patients with poor PFS. Univariate Cox analysis revealed that peritumoral edema, metastases, and the radiomics score were associated with poor PFS and were included in the construction of the nomogram. The Radiomics-T1WI-Clinical model exhibited the best performance, with AUC values of 0.947, 0.907, and 0.924 at 300 days, 600 days, and 900 days, respectively. The proposed ROI-Net framework demonstrated high consistency between its segmentation results and expert annotations. The radiomics features and the combined nomogram have the potential to aid in predicting PFS for patients with sarcoma.
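At prediction time, the signature produced by LASSO Cox regression is just a linear combination of the selected features, and patients are stratified by the resulting score. A hedged sketch with hypothetical feature names and weights (not the study's fitted coefficients):

```python
import numpy as np

# Hypothetical LASSO output: the handful of features that kept nonzero
# coefficients. Names and weights are illustrative only.
coefs = {"wavelet_glcm_entropy": 0.42,
         "shape_sphericity": -0.31,
         "firstorder_skewness": 0.18}

def radiomics_score(feature_row):
    """Linear combination of the LASSO-selected features."""
    return sum(w * feature_row[name] for name, w in coefs.items())

patients = [
    {"wavelet_glcm_entropy": 1.2, "shape_sphericity": 0.7, "firstorder_skewness": 0.1},
    {"wavelet_glcm_entropy": 0.4, "shape_sphericity": 0.9, "firstorder_skewness": -0.2},
]
scores = np.array([radiomics_score(p) for p in patients])
# Stratify by the median score into high- and low-risk groups.
high_risk = scores > np.median(scores)
```

The nomogram then adds clinical covariates (here, peritumoral edema and metastases) to this score in a multivariable Cox model.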

Generalizability, robustness, and correction bias of segmentations of thoracic organs at risk in CT images.

Guérendel C, Petrychenko L, Chupetlovska K, Bodalal Z, Beets-Tan RGH, Benson S

PubMed | Jul 1 2025
This study aims to assess and compare two state-of-the-art deep learning approaches for segmenting four thoracic organs at risk (OARs), namely the esophagus, trachea, heart, and aorta, in CT images in the context of radiotherapy planning. We compare a multi-organ segmentation approach against the fusion of multiple single-organ models, each dedicated to one OAR. All models were trained using nnU-Net with the default parameters and the full-resolution configuration. We evaluate their robustness under adversarial perturbations and their generalizability on external datasets, and explore potential biases introduced by expert corrections compared with fully manual delineations. The two approaches show excellent performance, with an average Dice score of 0.928 for the multi-class setting and 0.930 when fusing the four single-organ models. Evaluation on external datasets and under common procedural adversarial noise demonstrates the good generalizability of these models. In addition, expert corrections of both models show significant bias toward the original automated segmentation. The average Dice score between the two corrections is 0.93, ranging from 0.88 for the trachea to 0.98 for the heart. Both approaches demonstrate excellent performance and generalizability in segmenting the four thoracic OARs, potentially improving efficiency in radiotherapy planning. However, the multi-organ setting proves advantageous for its efficiency, requiring less training time and fewer resources, making it the preferable choice for this task. Moreover, corrections of AI segmentations by clinicians may introduce bias into the evaluation of AI approaches; a manually annotated test set should be used to assess the performance of such methods.
Question While manual delineation of thoracic organs at risk is labor-intensive, error-prone, and time-consuming, evaluation of AI models performing this task lacks robustness.
Findings The deep-learning models using the nnU-Net framework showed excellent performance, generalizability, and robustness in segmenting thoracic organs in CT, enhancing radiotherapy planning efficiency.
Clinical relevance Automatic segmentation of thoracic organs at risk can save clinicians time without compromising the quality of the delineations, and extensive evaluation across diverse settings demonstrates the potential of integrating such models into clinical practice.
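The Dice scores reported above (e.g., 0.93 between the two expert corrections) are twice the overlap of two binary masks divided by their total size. A minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Two toy masks disagreeing on a single voxel.
m1 = np.array([[1, 1, 0], [0, 1, 0]])
m2 = np.array([[1, 1, 0], [0, 0, 0]])
score = dice(m1, m2)   # 2*2 / (3+2) = 0.8
```

In practice the masks are full 3D label volumes and the score is averaged per organ and per case.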

Repeatability of AI-based, automatic measurement of vertebral and cardiovascular imaging biomarkers in low-dose chest CT: the ImaLife cohort.

Hamelink I, van Tuinen M, Kwee TC, van Ooijen PMA, Vliegenthart R

PubMed | Jul 1 2025
To evaluate the repeatability of AI-based automatic measurement of vertebral and cardiovascular markers on low-dose chest CT. We included participants of the population-based Imaging in Lifelines (ImaLife) study with low-dose chest CT at baseline and 3-4-month follow-up. An AI system (AI-Rad Companion chest CT prototype) performed automatic segmentation and quantification of vertebral height and density, aortic diameters, heart volume (cardiac chambers plus pericardial fat), and coronary artery calcium volume (CACV). A trained researcher visually checked segmentation accuracy. We evaluated the repeatability of adequate AI-based measurements between the baseline and repeat scans using the intraclass correlation coefficient (ICC), relative differences, and change in CACV risk categorization, assuming no physiological change. Overall, 632 participants (63 ± 11 years; 56.6% men) underwent short-term repeat CT (mean interval, 3.9 ± 1.8 months). Visual assessment showed adequate segmentation in both the baseline and repeat scans for 98.7% of vertebral measurements, 80.1-99.4% of aortic measurements (except the sinotubular junction, 65.2%), and 86.0% of CACV. For heart volume, 53.5% of segmentations were adequate at both baseline and repeat scans. The ICC for adequately segmented cases showed excellent agreement for all biomarkers (ICC > 0.9). The relative difference between baseline and repeat measurements was < 4% for vertebral and aortic measurements, 7.5% for heart volume, and 28.5% for CACV. There was high concordance in CACV risk categorization (81.2%). In low-dose chest CT, the segmentation accuracy of AI-based software was high for vertebral, aortic, and CACV evaluation and relatively low for heart volume. There was excellent repeatability of vertebral and aortic measurements and high concordance in overall CACV risk categorization.
Question Can AI algorithms for opportunistic screening in chest CT obtain an accurate and repeatable result when applied to multiple CT scans of the same participant?
Findings Vertebral and aortic analysis showed accurate segmentation and excellent repeatability; coronary calcium segmentation was generally accurate but showed modest repeatability due to a non-electrocardiogram-triggered protocol.
Clinical relevance Opportunistic screening for diseases outside the primary purpose of the CT scan is time-consuming. AI allows automated vertebral, aortic, and coronary artery calcium (CAC) assessment, with highly repeatable outcomes of vertebral and aortic biomarkers and high concordance in overall CAC categorization.
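The repeatability analysis hinges on the intraclass correlation coefficient computed from baseline/repeat measurement pairs. As a sketch, a one-way random-effects ICC(1,1) can be computed directly from the ANOVA mean squares (the values below are illustrative, not ImaLife data):

```python
import numpy as np

def icc_1_1(measurements):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_repeats) array."""
    m = np.asarray(measurements, float)
    n, k = m.shape
    grand = m.mean()
    subj_means = m.mean(axis=1)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)        # between-subject mean square
    msw = ((m - subj_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-subject mean square
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical baseline/repeat aortic diameters (mm) for five participants.
pairs = np.array([[30.1, 30.3], [27.5, 27.4], [33.0, 33.2],
                  [29.8, 29.7], [31.2, 31.3]])
icc = icc_1_1(pairs)   # close to 1: repeat scans barely differ
```

Published repeatability studies often use two-way ICC variants instead; the choice of model should match the study design.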

Visualizing Preosteoarthritis: Updates on UTE-Based Compositional MRI and Deep Learning Algorithms.

Sun D, Wu G, Zhang W, Gharaibeh NM, Li X

PubMed | Jul 1 2025
Osteoarthritis (OA) is heterogeneous and involves structural changes across the whole joint, including cartilage, meniscus/labrum, ligaments, and tendons, tissues mainly with short T2 relaxation times. Detecting OA before the onset of irreversible changes is crucial for early proactive management and for limiting the growing disease burden. Recent advanced quantitative imaging techniques and deep learning (DL) algorithms in musculoskeletal imaging have shown great potential for visualizing "pre-OA." In this review, we first focus on ultrashort echo time-based magnetic resonance imaging (MRI) techniques for direct visualization as well as quantitative morphological and compositional assessment of both short- and long-T2 musculoskeletal tissues, and then explore how DL is revolutionizing MRI analysis (e.g., automatic tissue segmentation and extraction of quantitative image biomarkers) and the classification, prediction, and management of OA. PLAIN LANGUAGE SUMMARY: Detecting osteoarthritis (OA) before the onset of irreversible changes is crucial for early proactive management. OA is heterogeneous and involves structural changes across the whole joint, including cartilage, meniscus/labrum, ligaments, and tendons, tissues mainly with short T2 relaxation times. Ultrashort echo time-based magnetic resonance imaging (MRI), in particular, enables direct visualization and quantitative compositional assessment of short-T2 tissues. Deep learning is revolutionizing MRI analysis (e.g., automatic tissue segmentation and extraction of quantitative image biomarkers) and the detection, classification, and prediction of disease. Together, they have advanced the identification of imaging biomarkers/features for pre-OA. LEVEL OF EVIDENCE: 5. TECHNICAL EFFICACY: Stage 2.
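The reason UTE sequences matter for these tissues is the mono-exponential transverse decay S(TE) = S0·exp(-TE/T2): for a short-T2 tissue, nearly all signal is gone by a conventional echo time. A small numerical illustration (the T2 and TE values are only indicative):

```python
import numpy as np

def signal(te_ms, s0=1.0, t2_ms=1.0):
    """Mono-exponential transverse decay S(TE) = S0 * exp(-TE / T2)."""
    return s0 * np.exp(-np.asarray(te_ms, float) / t2_ms)

# A tissue with T2 ~ 1 ms (tendon-like): at a conventional TE of 10 ms
# essentially no signal survives, while a UTE acquisition at TE = 0.05 ms
# retains ~95% of it.
s_conventional = signal(10.0)
s_ute = signal(0.05)
```

This is why short-T2 structures appear as signal voids on standard sequences but become directly visible (and quantifiable) with UTE.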

A superpixel based self-attention network for uterine fibroid segmentation in high intensity focused ultrasound guidance images.

Wen S, Zhang D, Lei Y, Yang Y

PubMed | Jul 1 2025
Ultrasound guidance images are widely used for high intensity focused ultrasound (HIFU) therapy; however, speckle, acoustic shadows, and signal attenuation hinder radiologists' reading of the images and make segmentation more difficult. To address these issues, we proposed a superpixel-based attention network, integrating superpixels and self-attention mechanisms to automatically segment tumor regions in ultrasound guidance images. The method is implemented within a region splitting-and-merging framework. The ultrasound guidance image is first over-segmented into superpixels; features within the superpixels are then extracted and encoded into superpixel feature matrices of uniform size. The network takes the superpixel feature matrices and their positional information as input and classifies the superpixels using self-attention modules and convolutional layers. Finally, the superpixels are merged based on the classification results to obtain the tumor region, achieving automatic tumor segmentation. The method was applied to a local dataset of 140 ultrasound guidance images from uterine fibroid HIFU therapy. Performance was quantitatively evaluated by comparing the segmentation results with those of pixel-wise segmentation networks. The proposed method achieved 75.95% mean intersection over union (IoU) and 7.34% mean normalized Hausdorff distance (NormHD). Compared with the segmentation transformer (SETR), this represents an improvement of 5.52% in IoU and 1.49% in NormHD. Paired t-tests were conducted to evaluate the significance of the differences in IoU and NormHD between the proposed method and the comparison methods; all p-values were less than 0.05.
The analysis of evaluation metrics and segmentation results indicates that the proposed method performs better than existing pixel-wise segmentation networks in segmenting the tumor region on ultrasound guidance images.
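The mean IoU reported above measures the overlap between the merged-superpixel prediction and the reference mask (NormHD, the boundary metric, is not sketched here). A minimal version:

```python
import numpy as np

def iou(a, b):
    """Intersection over union between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# Toy example: a merged-superpixel prediction vs. a reference mask.
pred = np.array([[1, 1, 0], [1, 0, 0]])
ref = np.array([[1, 1, 0], [0, 0, 0]])
overlap = iou(pred, ref)   # 2 / 3
```

Because superpixel merging produces region-level decisions, boundary-sensitive metrics such as NormHD complement IoU when comparing against pixel-wise networks.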

Attention residual network for medical ultrasound image segmentation.

Liu H, Zhang P, Hu J, Huang Y, Zuo S, Li L, Liu M, She C

PubMed | Jul 1 2025
Ultrasound imaging can distinctly display the morphology and structure of internal organs, enabling examination of organs such as the breast, liver, and thyroid. It can identify the locations of tumors, nodules, and other lesions, making it an effective tool for treatment monitoring and rehabilitation evaluation. Typically, the attending physician must manually delineate the boundaries of lesions, such as tumors, in ultrasound images. However, several issues reduce the accuracy of this delineation: the high noise level in ultrasound images, degradation of image quality by surrounding tissues, and the dependence of lesion localization on the operator's experience and proficiency. With the advancement of deep learning, its application to medical image segmentation has become increasingly prevalent. For instance, while the U-Net model has demonstrated favorable performance in medical image segmentation, the convolution layers of the traditional U-Net are relatively simple, leading to suboptimal extraction of global information; moreover, the significant noise present in ultrasound images makes the model prone to interference. In this work, we propose an Attention Residual Network model (ARU-Net). Residual connections within the encoder enhance the learning capacity of the model, and a spatial hybrid convolution module is integrated to improve the extraction of global information and deepen the vertical architecture of the network. During feature fusion in the skip connections, a channel attention mechanism and a multi-convolutional self-attention mechanism are introduced, respectively, to suppress noisy points within the fused feature maps, enabling the model to acquire more information about the target region.
Finally, the predictive efficacy of the model was evaluated using publicly accessible breast ultrasound and thyroid ultrasound data. The ARU-Net achieved mean Intersection over Union (mIoU) values of 82.59% and 84.88%, accuracy values of 97.53% and 96.09%, and F1-score values of 90.06% and 89.7% for breast and thyroid ultrasound, respectively.
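Channel attention in skip connections typically follows the squeeze-and-excitation pattern: pool each channel to a scalar, pass the channel descriptor through a small bottleneck, and rescale the channels with sigmoid gates. A NumPy sketch with random weights (illustrative only; ARU-Net's exact module may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation style channel attention on a (C, H, W) map:
    global-average-pool each channel, pass the descriptor through a small
    bottleneck, and rescale channels by the resulting sigmoid gates."""
    squeezed = x.mean(axis=(1, 2))                 # (C,) channel descriptors
    hidden = np.maximum(0.0, w1 @ squeezed)        # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gates in (0, 1)
    return x * gates[:, None, None]

c, h, w, r = 8, 4, 4, 2                 # r = bottleneck reduction ratio
x = rng.standard_normal((c, h, w))
w1 = rng.standard_normal((c // r, c))   # random weights, for illustration
w2 = rng.standard_normal((c, c // r))
y = channel_attention(x, w1, w2)
```

Because every gate lies in (0, 1), noisy channels are attenuated rather than zeroed, which is the behavior the abstract describes for suppressing noise in the fused feature maps.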

AI-based CT assessment of 3117 vertebrae reveals significant sex-specific vertebral height differences.

Palm V, Thangamani S, Budai BK, Skornitzke S, Eckl K, Tong E, Sedaghat S, Heußel CP, von Stackelberg O, Engelhardt S, Kopytova T, Norajitra T, Maier-Hein KH, Kauczor HU, Wielpütz MO

PubMed | Jul 1 2025
Predicting vertebral height is complex due to individual factors. AI-based medical imaging analysis offers new opportunities for vertebral assessment, and these novel methods may contribute to sex-adapted nomograms and vertebral height prediction models, aiding in the diagnosis of spinal conditions such as compression fractures and supporting individualized, sex-specific medicine. In this study, an AI-based CT imaging spine analysis of 262 subjects (mean age 32.36 years, range 20-54 years) was conducted, covering a total of 3117 vertebrae, to assess sex-associated anatomical variation. Automated segmentations provided anterior, central, and posterior vertebral heights. Regression analysis with a cubic spline linear mixed-effects model was adjusted for age, sex, and spinal segment. Measurement reliability was confirmed by two readers, with an intraclass correlation coefficient (ICC) of 0.94-0.98. Female vertebral heights were consistently smaller than male ones (p < 0.05). The largest differences were found in the upper thoracic spine (T1-T6), with mean differences of 7.9-9.0%; specifically, T1 and T2 showed differences of 8.6% and 9.0%, respectively. The strongest height increase between consecutive vertebrae was observed from T9 to L1 (mean slope 1.46, i.e., 6.63%, for females; 1.53, i.e., 6.48%, for males). This study highlights significant sex-based differences in vertebral height, yielding sex-adapted nomograms that can enhance diagnostic accuracy and support individualized patient assessment.

Improved segmentation of hepatic vascular networks in ultrasound volumes using 3D U-Net with intensity transformation-based data augmentation.

Takahashi Y, Sugino T, Onogi S, Nakajima Y, Masuda K

PubMed | Jul 1 2025
Accurate three-dimensional (3D) segmentation of hepatic vascular networks is crucial for supporting ultrasound-mediated theranostics for liver diseases. Despite advances in deep learning, accurate segmentation remains challenging due to ultrasound image quality issues, including intensity and contrast fluctuations. This study introduces intensity transformation-based data augmentation methods to improve deep convolutional neural network-based segmentation of hepatic vascular networks. We employed a 3D U-Net, which leverages spatial contextual information, as the baseline. To address intensity and contrast fluctuations and improve 3D U-Net performance, we implemented data augmentation using high-contrast intensity transformations with S-shaped tone curves and low-contrast intensity transformations with gamma and inverse S-shaped tone curves. We conducted validation experiments on 78 ultrasound volumes to evaluate the effects of both geometric and intensity transformation-based data augmentation. We found that high-contrast intensity transformation-based data augmentation decreased segmentation accuracy, whereas low-contrast intensity transformation-based data augmentation significantly improved Recall and Dice. Additionally, combining geometric and low-contrast intensity transformation-based data augmentation, through an OR operation on their results, further enhanced segmentation accuracy, achieving improvements of 9.7% in Recall and 3.3% in Dice. This study demonstrated the effectiveness of low-contrast intensity transformation-based data augmentation in improving the volumetric segmentation of hepatic vascular networks from ultrasound volumes.
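The two families of tone curves can be sketched as simple intensity maps on a [0, 1]-normalized volume. One plausible reading of these transforms (parameter values are illustrative, not the paper's) is:

```python
import numpy as np

def gamma_transform(img, gamma):
    """Gamma tone curve on [0, 1]; gamma != 1 compresses part of the range,
    lowering contrast there (a low-contrast-style augmentation)."""
    return np.clip(img, 0.0, 1.0) ** gamma

def s_curve(img, k=10.0, mid=0.5):
    """Sigmoid (S-shaped) tone curve, rescaled to map 0 -> 0 and 1 -> 1.
    k > 0 steepens contrast around `mid`; an inverse S-curve flattens it."""
    x = np.clip(img, 0.0, 1.0)
    s = 1.0 / (1.0 + np.exp(-k * (x - mid)))
    lo = 1.0 / (1.0 + np.exp(k * mid))
    hi = 1.0 / (1.0 + np.exp(-k * (1.0 - mid)))
    return (s - lo) / (hi - lo)

img = np.linspace(0.0, 1.0, 5)            # stand-in for a normalized US volume
low_contrast = gamma_transform(img, 0.5)  # gamma < 1 brightens mid-tones
high_contrast = s_curve(img, k=10.0)      # S-curve stretches mid-range contrast
```

Applying such curves randomly per training sample simulates the intensity and contrast fluctuations the study identifies as the main obstacle.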
