Zero-shot segmentation of spinal vertebrae with metastatic lesions: an analysis of Meta's Segment Anything Model 2 and factors affecting learning free segmentation.

Khazanchi R, Govind S, Jain R, Du R, Dahdaleh NS, Ahuja CS, El Tecle N

PubMed | Jul 1 2025
Accurate vertebral segmentation is an important step in imaging analysis pipelines for diagnosis and subsequent treatment of spinal metastases. Segmenting these metastases is especially challenging given their radiological heterogeneity. Conventional approaches for segmenting vertebrae have included manual review or deep learning; however, manual review is time-intensive and subject to interrater reliability issues, while deep learning requires large datasets to train. The rise of generative AI, notably tools such as Meta's Segment Anything Model 2 (SAM 2), holds promise in its ability to rapidly generate segmentations of any image without pretraining (zero-shot). The authors of this study aimed to assess the ability of SAM 2 to segment vertebrae with metastases. A publicly available set of spinal CT scans from The Cancer Imaging Archive was used, which included patient sex, BMI, vertebral locations, types of metastatic lesion (lytic, blastic, or mixed), and primary cancer type. Ground-truth segmentations for each vertebra, derived by neuroradiologists, were also extracted from the dataset. SAM 2 then produced segmentations for each vertebral slice without any training data, all of which were compared to the gold-standard segmentations using the Dice similarity coefficient (DSC). Relative performance differences were assessed across clinical subgroups using standard statistical techniques. Imaging data were extracted for 55 patients and 779 unique thoracolumbar vertebrae, 167 of which had metastatic tumor involvement. Across these vertebrae, SAM 2 had a mean volumetric DSC of 0.833 ± 0.053. SAM 2 performed significantly worse on thoracic vertebrae relative to lumbar vertebrae, in female patients relative to male patients, and in obese patients relative to non-obese patients. These results demonstrate that general-purpose segmentation models like SAM 2 can provide reasonable vertebral segmentation accuracy with no pretraining, with efficacy comparable to previously published trained models. Future research should include optimizations of spine segmentation models for vertebral location and patient body habitus, as well as for variations in imaging quality.
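
The volumetric Dice similarity coefficient used here has a simple closed form, DSC = 2|A∩B| / (|A| + |B|). Below is a minimal NumPy sketch of that computation for binary masks; the array names, shapes, and random masks are illustrative stand-ins, not the study's pipeline.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Volumetric Dice similarity coefficient for two binary masks.

    pred_mask, gt_mask: boolean (or 0/1) arrays of identical shape,
    e.g. (slices, height, width) for a stacked CT segmentation.
    """
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Illustrative use with random masks (stand-ins for a SAM 2 output and the
# neuroradiologist-derived ground truth):
rng = np.random.default_rng(0)
pred = rng.random((40, 128, 128)) > 0.5
gt = rng.random((40, 128, 128)) > 0.5
print(f"DSC = {dice_coefficient(pred, gt):.3f}")
```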

The value of machine learning based on spectral CT quantitative parameters in distinguishing benign from malignant thyroid micro-nodules.

Song Z, Liu Q, Huang J, Zhang D, Yu J, Zhou B, Ma J, Zou Y, Chen Y, Tang Z

PubMed | Jul 1 2025
More cases of thyroid micro-nodules have been diagnosed annually in recent years because of advancements in diagnostic technologies and increased public health awareness. This study explored the application value of various machine learning (ML) algorithms based on dual-layer spectral computed tomography (DLCT) quantitative parameters in distinguishing benign from malignant thyroid micro-nodules. A total of 338 thyroid micro-nodules (177 malignant and 161 benign) were randomly divided into a training cohort (n = 237) and a testing cohort (n = 101) at a ratio of 7:3. Four typical radiological features and 19 DLCT quantitative parameters in the arterial and venous phases were measured. Recursive feature elimination was employed for variable selection. Three ML algorithms, namely support vector machine (SVM), logistic regression (LR), and naive Bayes (NB), were implemented to construct predictive models. Predictive performance was evaluated via receiver operating characteristic (ROC) curve analysis. A variable set containing 6 key variables, selected under the "one standard error" rule, was identified for the SVM model, which performed well in the training and testing cohorts (area under the ROC curve (AUC): 0.924 and 0.931, respectively). A variable set containing 2 key variables was identified for the NB model, which performed well in the training and testing cohorts (AUC: 0.882 and 0.899, respectively). A variable set containing 8 key variables was identified for the LR model, which performed well in the training and testing cohorts (AUC: 0.924 and 0.925, respectively). In total, nine ML models were developed with varying variable sets (2, 6, or 8 variables), all of which consistently achieved AUC values above 0.85 in the training, cross-validation (CV) training, CV validation, and testing cohorts. Artificial intelligence-based DLCT quantitative parameters are promising for distinguishing benign from malignant thyroid micro-nodules.
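
The modeling pipeline described (recursive feature elimination, then SVM, LR, and NB classifiers evaluated by ROC AUC on a 7:3 split) can be sketched with scikit-learn as below. The synthetic data, the feature count standing in for the DLCT parameters, and the choice of a logistic-regression estimator inside RFE are all assumptions for illustration, not details from the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for 338 nodules x (19 DLCT parameters + 4 radiological features).
X, y = make_classification(n_samples=338, n_features=23, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)  # 7:3 split as in the abstract

# Recursive feature elimination with a linear model to rank and select features.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=6).fit(X_train, y_train)
X_train_sel, X_test_sel = selector.transform(X_train), selector.transform(X_test)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)),
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "NB": make_pipeline(StandardScaler(), GaussianNB()),
}
for name, model in models.items():
    model.fit(X_train_sel, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test_sel)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```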

Radiomics Analysis of Different Machine Learning Models based on Multiparametric MRI to Identify Benign and Malignant Testicular Lesions.

Jian Y, Yang S, Liu R, Tan X, Zhao Q, Wu J, Chen Y

PubMed | Jul 1 2025
To develop and validate a machine learning-based prediction model that uses multiparametric magnetic resonance imaging (MRI) to predict benign and malignant lesions of the testis. The study retrospectively enrolled 148 patients with pathologically confirmed benign and malignant testicular lesions, divided into a training set (n=103) and a validation set (n=45). Radiomics characteristics were derived from T2-weighted (T2WI), contrast-enhanced T1-weighted (CE-T1WI), diffusion-weighted imaging (DWI), and apparent diffusion coefficient (ADC) MRI images, followed by feature selection. A machine learning-based combined model was developed by incorporating radiomics scores (rad-scores) from the optimal radiomics model along with clinical predictors. Receiver operating characteristic (ROC) curves were drawn and the area under the curve (AUC) was used to evaluate and compare the predictive performance of each model. The diagnostic efficacy of the various machine learning models was compared using the DeLong test. Radiomics features were extracted from four sequence-based groups (CE-T1WI+DWI+ADC+T2WI), and the logistic regression (LR) model showed the best performance among the radiomics models. The clinical model identified one independent predictor. The combined clinical-radiomics model showed the best overall performance, with an AUC of 0.932 (95% confidence interval (CI) 0.868-0.978), sensitivity of 0.875, specificity of 0.871, and accuracy of 0.884 in the validation set. The combined clinical-radiomics model can be used as a reliable tool to predict benign and malignant testicular lesions and provide a reference for clinical treatment decisions.
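
Per-sequence radiomics extraction of the kind described is commonly performed with pyradiomics on co-registered images plus a lesion mask. The sketch below is a minimal illustration under that assumption; the file names, mask, and extractor settings are hypothetical placeholders, not the authors' pipeline.

```python
# Minimal sketch of per-sequence radiomics extraction with pyradiomics,
# assuming co-registered MRI volumes and a lesion mask saved as NIfTI files.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableAllFeatures()  # first-order, shape, and texture feature classes

sequences = {
    "T2WI": "patient001_t2.nii.gz",         # hypothetical file names
    "CE-T1WI": "patient001_ce_t1.nii.gz",
    "DWI": "patient001_dwi.nii.gz",
    "ADC": "patient001_adc.nii.gz",
}
mask_path = "patient001_lesion_mask.nii.gz"

features = {}
for seq_name, image_path in sequences.items():
    result = extractor.execute(image_path, mask_path)
    # Keep only numeric feature values, prefixed by sequence name.
    features.update({
        f"{seq_name}_{k}": v for k, v in result.items()
        if not k.startswith("diagnostics_")
    })

print(f"Extracted {len(features)} features across {len(sequences)} sequences")
```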

External Validation of an Artificial Intelligence Algorithm Using Biparametric MRI and Its Simulated Integration with Conventional PI-RADS for Prostate Cancer Detection.

Belue MJ, Mukhtar V, Ram R, Gokden N, Jose J, Massey JL, Biben E, Buddha S, Langford T, Shah S, Harmon SA, Turkbey B, Aydin AM

PubMed | Jul 1 2025
The Prostate Imaging Reporting and Data System (PI-RADS) is subject to considerable variability in inter-reader performance. Artificial intelligence (AI) algorithms have been suggested to provide performance comparable to PI-RADS for assessing prostate cancer (PCa) risk, albeit tested in highly selected cohorts. This study aimed to assess an AI algorithm for PCa detection in a clinical practice setting and to simulate integration of the AI model with PI-RADS for assessment of indeterminate PI-RADS 3 lesions. This retrospective cohort study externally validated a biparametric MRI-based AI model for PCa detection in a consecutive cohort of patients who underwent prostate MRI and subsequently targeted and systematic prostate biopsy at a urology clinic between January 2022 and March 2024. Radiologist interpretations followed PI-RADS v2.1, and biopsies were conducted per PI-RADS scores. The previously developed AI model provided lesion segmentations and cancer probability maps, which were compared to biopsy results. Additionally, we conducted a simulation that adjusted biopsy thresholds for index PI-RADS category 3 studies, in which positive AI predictions upgraded these studies to PI-RADS category 4. Among 144 patients with a median age of 70 years and a PSA density of 0.17 ng/mL/cc, the AI model's sensitivity for detection of PCa (86.6%) and clinically significant PCa (csPCa, 88.4%) was comparable to that of radiologists (85.7%, p=0.84, and 89.5%, p=0.80, respectively). The simulation combining radiologist and AI evaluations improved csPCa sensitivity by 5.8% (p=0.025). The combination of AI, PI-RADS, and PSA density provided the best diagnostic performance for csPCa (area under the curve [AUC]=0.76). The AI algorithm demonstrated PCa detection rates comparable to PI-RADS. The combination of AI with radiologist interpretation improved sensitivity and could be instrumental in the assessment of low-risk and indeterminate PI-RADS lesions. The role of AI in PCa screening remains to be further elucidated.
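
The simulated integration amounts to a simple decision rule: index PI-RADS 3 studies with a positive AI prediction are treated as PI-RADS 4 and therefore biopsied. A small pandas sketch of that rule and the resulting csPCa sensitivity is shown below; the column names, positivity threshold, and toy cohort are illustrative assumptions, not the study's data.

```python
import pandas as pd

def simulate_ai_upgrade(df: pd.DataFrame, ai_threshold: float = 0.5) -> pd.DataFrame:
    """Upgrade index PI-RADS 3 studies to category 4 when the AI probability is high."""
    out = df.copy()
    upgrade = (out["pirads"] == 3) & (out["ai_probability"] >= ai_threshold)
    out["pirads_integrated"] = out["pirads"].where(~upgrade, 4)
    return out

def cspca_sensitivity(df: pd.DataFrame, score_col: str, biopsy_cutoff: int = 4) -> float:
    """Fraction of csPCa cases flagged for biopsy (score >= cutoff)."""
    cspca = df[df["cspca_on_biopsy"] == 1]
    return (cspca[score_col] >= biopsy_cutoff).mean()

# Toy cohort standing in for the validation set.
cohort = pd.DataFrame({
    "pirads":          [3, 3, 4, 5, 2, 3, 4],
    "ai_probability":  [0.7, 0.2, 0.9, 0.95, 0.1, 0.6, 0.4],
    "cspca_on_biopsy": [1, 0, 1, 1, 0, 1, 0],
})
cohort = simulate_ai_upgrade(cohort)
print("Radiologist-only sensitivity:", cspca_sensitivity(cohort, "pirads"))
print("PI-RADS + AI sensitivity:   ", cspca_sensitivity(cohort, "pirads_integrated"))
```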

Embryonic cranial cartilage defects in the Fgfr3<sup>Y367C/+</sup> mouse model of achondroplasia.

Motch Perrine SM, Sapkota N, Kawasaki K, Zhang Y, Chen DZ, Kawasaki M, Durham EL, Heuzé Y, Legeai-Mallet L, Richtsmeier JT

PubMed | Jul 1 2025
Achondroplasia, the most common chondrodysplasia in humans, is caused by one of two gain-of-function mutations localized in the transmembrane domain of fibroblast growth factor receptor 3 (FGFR3), leading to constitutive activation of FGFR3 and subsequent growth plate cartilage and bone defects. Phenotypic features of achondroplasia include macrocephaly with frontal bossing, midface hypoplasia, disproportionate shortening of the extremities, brachydactyly with trident configuration of the hand, and bowed legs. The condition is defined primarily by its postnatal effects on bone and cartilage, and embryonic development of tissues in affected individuals is not well studied. Using the Fgfr3<sup>Y367C/+</sup> mouse model of achondroplasia, we investigated the developing chondrocranium and Meckel's cartilage (MC) at embryonic days (E)14.5 and E16.5. Sparse hand annotations of chondrocranial and MC cartilages visualized in phosphotungstic acid-enhanced three-dimensional (3D) micro-computed tomography (microCT) images were used to train our automatic deep learning-based 3D segmentation model and produce 3D isosurfaces of the chondrocranium and MC. Using 3D coordinates of landmarks measured on the 3D isosurfaces, we quantified differences in the chondrocranium and MC of Fgfr3<sup>Y367C/+</sup> mice relative to those of their unaffected littermates. Statistically significant differences in morphology and growth of the chondrocranium and MC were found, indicating direct effects of this Fgfr3 mutation on embryonic cranial and pharyngeal cartilages, which in turn can secondarily affect cranial dermal bone development. Our results support the suggestion that early therapeutic intervention during cartilage formation may lessen the effects of this condition.
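
Once 3D landmark coordinates have been placed on the segmented isosurfaces, genotype differences can be summarized from inter-landmark distances. The sketch below computes pairwise landmark distances per specimen and compares one distance between groups with a Welch t-test; the landmark counts, group sizes, and random coordinates are illustrative, and this is not the authors' specific morphometric workflow.

```python
import numpy as np
from itertools import combinations
from scipy import stats

def interlandmark_distances(landmarks: np.ndarray) -> np.ndarray:
    """All pairwise Euclidean distances for one specimen's (n_landmarks, 3) array."""
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(landmarks.shape[0]), 2)])

rng = np.random.default_rng(1)
n_landmarks = 12
# Toy data: 8 mutant and 8 control specimens, 12 landmarks each (placeholder values).
mutants = [rng.normal(size=(n_landmarks, 3)) for _ in range(8)]
controls = [rng.normal(size=(n_landmarks, 3)) for _ in range(8)]

mut_dists = np.array([interlandmark_distances(m) for m in mutants])
ctl_dists = np.array([interlandmark_distances(c) for c in controls])

# Compare the first inter-landmark distance between genotypes (Welch's t-test).
t_stat, p_value = stats.ttest_ind(mut_dists[:, 0], ctl_dists[:, 0], equal_var=False)
print(f"distance 0: t = {t_stat:.2f}, p = {p_value:.3f}")
```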

A Deep Learning Approach for Nerve Injury Classification in Brachial Plexopathies Using Magnetic Resonance Neurography with Modified Hiking Optimization Algorithm.

Dahou A, Elaziz MA, Khattap MG, Hassan HGEMA

PubMed | Jul 1 2025
Brachial plexopathies (BPs) encompass a complex spectrum of nerve injuries affecting motor and sensory function in the upper extremities. Diagnosis is challenging due to the intricate anatomy and symptom overlap with other neuropathies. Magnetic Resonance Neurography (MRN) provides advanced imaging but requires specialized interpretation. This study proposes an AI-based framework that combines deep learning (DL) with the modified Hiking Optimization Algorithm (MHOA) enhanced by a Comprehensive Learning (CL) technique to improve the classification of nerve injuries (neuropraxia, axonotmesis, neurotmesis) using MRN data. The framework utilizes MobileNetV4 for feature extraction and MHOA for optimized feature selection across different MRI sequences (STIR, T2, T1, and DWI). A dataset of 39 patients diagnosed with BP was used. The framework classifies injuries based on Seddon's criteria, distinguishing between normal and abnormal conditions as well as injury severity. The model achieved excellent performance, with 1.0000 accuracy in distinguishing normal from abnormal conditions using STIR and T2 sequences. For injury severity classification, accuracy was 0.9820 in STIR, outperforming the original HOA and other metaheuristic algorithms. Additionally, high classification accuracy (0.9667) was observed in DWI. The proposed framework outperformed traditional methods and demonstrated high sensitivity and specificity. The proposed AI-based framework significantly improves the diagnosis of BP by accurately classifying nerve injury types. By integrating DL and optimization techniques, it reduces diagnostic variability, making it a valuable tool for clinical settings with limited specialized neuroimaging expertise. This framework has the potential to enhance clinical decision-making and optimize patient outcomes through precise and timely diagnoses.
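
The modified Hiking Optimization Algorithm itself is not detailed in the abstract, but wrapper-style metaheuristic feature selection generally optimizes a binary mask over extracted features against a classifier's cross-validated error plus a small penalty on feature count. The sketch below shows that generic objective; the k-NN evaluator, the error/size weighting, and the random search standing in for the MHOA update loop are all assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(mask: np.ndarray, X: np.ndarray, y: np.ndarray, alpha: float = 0.99) -> float:
    """Score a binary feature mask: lower is better.

    Combines cross-validated classification error with a penalty on the number
    of selected features, the usual objective in wrapper-based selection.
    """
    if mask.sum() == 0:
        return 1.0  # selecting nothing is the worst possible solution
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                          X[:, mask.astype(bool)], y, cv=5).mean()
    return alpha * (1.0 - acc) + (1.0 - alpha) * mask.sum() / mask.size

# Random-search stand-in for a metaheuristic loop, just to exercise the objective.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 64))    # e.g. deep features from a CNN backbone (placeholder)
y = rng.integers(0, 3, size=120)  # three injury classes (placeholder labels)
best_mask, best_score = None, np.inf
for _ in range(50):
    mask = rng.integers(0, 2, size=X.shape[1])
    score = fitness(mask, X, y)
    if score < best_score:
        best_mask, best_score = mask, score
print(f"best fitness = {best_score:.3f}, features kept = {int(best_mask.sum())}")
```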

A Longitudinal Analysis of Pre- and Post-Operative Dysmorphology in Metopic Craniosynostosis.

Beiriger JW, Tao W, Irgebay Z, Smetona J, Dvoracek L, Kass NM, Dixon A, Zhang C, Mehta M, Whitaker R, Goldstein JA

PubMed | Jul 1 2025
The purpose of this study is to objectively quantify the degree of overcorrection in our current practice and to evaluate longitudinal morphological changes using CranioRate<sup>TM</sup>, a novel machine learning skull morphology assessment tool. Design: Retrospective cohort study across multiple time points. Setting: Tertiary care children's hospital. Patients: Patients with preoperative and postoperative CT scans who underwent fronto-orbital advancement (FOA) for metopic craniosynostosis. We evaluated preoperative, postoperative, and two-year follow-up skull morphology using CranioRate<sup>TM</sup> to generate a Metopic Severity Score (MSS), a measure of the degree of metopic dysmorphology, and a Cranial Morphology Deviation (CMD) score, a measure of deviation from normal skull morphology. Fifty-five patients were included; the average age at surgery was 1.3 years. Sixteen patients underwent follow-up CT imaging at an average of 3.1 years. Preoperative MSS was 6.3 ± 2.5 (CMD 199.0 ± 39.1), immediate postoperative MSS was -2.0 ± 1.9 (CMD 208.0 ± 27.1), and longitudinal MSS was 1.3 ± 1.1 (CMD 179.8 ± 28.1). MSS approached normal (defined as MSS = 0) at two-year follow-up. There was a significant relationship between preoperative MSS and follow-up MSS (R<sup>2</sup> = 0.70). MSS quantifies overcorrection and normalization of head shape, as patients with negative values were less "metopic" than normal postoperatively and approached 0 at 2-year follow-up. CMD worsened postoperatively due to postoperative bony changes associated with surgical displacements following FOA. All patients had similar postoperative metopic dysmorphology, with no significant association with preoperative severity. More severe patients had worse longitudinal dysmorphology, reinforcing that regression to the metopic shape is a postoperative risk that increases with preoperative severity.
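
The reported R<sup>2</sup> = 0.70 between preoperative and two-year MSS corresponds to an ordinary least-squares fit of follow-up score on preoperative score. A short scipy sketch of that calculation is below; the score values are illustrative placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Illustrative preoperative and two-year follow-up Metopic Severity Scores
# (placeholder values only; not from the study).
preop_mss = np.array([3.1, 4.5, 5.0, 6.2, 7.4, 8.0, 8.8, 9.6])
followup_mss = np.array([0.2, 0.6, 0.9, 1.1, 1.6, 1.8, 2.3, 2.7])

fit = stats.linregress(preop_mss, followup_mss)
print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.2f}")
print(f"R^2 = {fit.rvalue**2:.2f}")  # analogous to the reported R^2 between scores
```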

ResNet-Transformer deep learning model-aided detection of dens evaginatus.

Wang S, Liu J, Li S, He P, Zhou X, Zhao Z, Zheng L

PubMed | Jul 1 2025
Dens evaginatus is a dental morphological developmental anomaly. Failing to detect it may lead to tubercle fracture and pulpal/periapical disease. Consequently, early detection and intervention are important for preserving vital pulp. This study aimed to develop a deep learning model to assist dentists in the early diagnosis of dens evaginatus, thereby supporting early intervention and mitigating the risk of severe consequences. A deep learning model was developed utilizing panoramic radiograph images sourced from 1410 patients aged 3-16 years, with high-quality annotations to enable the automatic detection of dens evaginatus. Model performance and the model's efficacy in aiding dentists were evaluated. The findings indicated that the deep learning model demonstrated commendable sensitivity (0.8600) and specificity (0.9200), outperforming dentists in detecting dens evaginatus with an F1-score of 0.8866 compared to their average F1-score of 0.8780, indicating that the model could detect dens evaginatus with greater precision. Furthermore, with its support, young dentists heightened their focus on dens evaginatus in tooth germs and achieved improved diagnostic accuracy. Based on these results, the integration of deep learning for dens evaginatus detection holds significance and can augment dentists' proficiency in identifying this anomaly.
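
The abstract names a ResNet-Transformer model but does not describe its layout; a common way to combine the two is to flatten a ResNet feature map into a token sequence and pass it through a Transformer encoder before classification. The PyTorch sketch below follows that generic pattern under the assumption of a ResNet-18 backbone; it is not the authors' published architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ResNetTransformerClassifier(nn.Module):
    """Generic ResNet backbone + Transformer encoder head for image classification."""

    def __init__(self, num_classes: int = 2, d_model: int = 256, nhead: int = 8,
                 num_layers: int = 2):
        super().__init__()
        backbone = resnet18(weights=None)
        # Keep everything up to (but not including) the global pool and fc head.
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])  # -> (B, 512, H', W')
        self.proj = nn.Conv2d(512, d_model, kernel_size=1)
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.proj(self.backbone(x))        # (B, d_model, H', W')
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H'*W', d_model)
        encoded = self.encoder(tokens)             # self-attention over spatial tokens
        return self.head(encoded.mean(dim=1))      # pool tokens, then classify

# Quick shape check on a dummy radiograph-sized crop.
model = ResNetTransformerClassifier(num_classes=2)
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```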

Response prediction for neoadjuvant treatment in locally advanced rectal cancer patients - improvement in decision-making: A systematic review.

Boldrini L, Charles-Davies D, Romano A, Mancino M, Nacci I, Tran HE, Bono F, Boccia E, Gambacorta MA, Chiloiro G

PubMed | Jul 1 2025
Predicting pathological complete response (pCR) from pre- or post-treatment features could be significant in improving the process of making clinical decisions and providing a more personalized treatment approach for better treatment outcomes. However, the lack of external validation of predictive models, missing in several published articles, is a major issue that can potentially limit the reliability and applicability of predictive models in clinical settings. Therefore, this systematic review described different externally validated methods of predicting response to neoadjuvant chemoradiotherapy (nCRT) in locally advanced rectal cancer (LARC) patients and how they could improve clinical decision-making. An extensive search for eligible articles was performed on PubMed, Cochrane, and Scopus between 2018 and 2023, using the keywords: (Response OR outcome) prediction AND (neoadjuvant OR chemoradiotherapy) treatment in 'locally advanced Rectal Cancer'. Inclusion criteria were: (i) studies including patients diagnosed with LARC (T3/4 and N- or any T and N+) by pre-medical imaging and pathological examination or as stated by the author; (ii) standardized nCRT completed; (iii) treatment with long- or short-course radiotherapy; (iv) studies reporting on the prediction of response to nCRT with pathological complete response (pCR) as the primary outcome; (v) studies reporting external validation results for response prediction; and (vi) regarding language restrictions, only articles in English were accepted. Exclusion criteria were: (i) case report studies, conference abstracts, reviews, and studies reporting patients with distant metastases at diagnosis; and (ii) studies reporting response prediction with only internally validated approaches. Three researchers (DC-D, FB, HT) independently reviewed and screened the titles and abstracts of all articles retrieved after de-duplication. Possible disagreements were resolved through discussion among the three researchers. If necessary, three other researchers (LB, GC, MG) were consulted to make the final decision. Data extraction was performed using the CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies (CHARMS) template, and quality assessment was done using the Prediction model Risk Of Bias Assessment Tool (PROBAST). A total of 4547 records were identified from the three databases. After excluding 392 duplicate results, 4155 records underwent title and abstract screening; 3800 articles were excluded at this stage and 355 articles were retrieved. Of the 355 retrieved articles, 51 studies were assessed for eligibility. Nineteen reports were then excluded due to a lack of external validation, and 4 were excluded because pCR was not evaluated as the primary outcome. Twenty-eight articles were therefore eligible and included in this systematic review. In terms of quality assessment, 89% of the models had low concern in the participants domain, while 11% had an unclear rating. 96% of the models were of low concern in both the predictors and outcome domains. The overall rating showed high applicability potential of the models, with 82% showing low concern and 18% deemed unclear. Most of the externally validated techniques showed promising performance and the potential to be applied in clinical settings, which is a crucial step towards evidence-based medicine. However, more studies focusing on the external validation of these models in larger cohorts are necessary to ensure that they can reliably predict outcomes in diverse populations.

A multimodal deep-learning model based on multichannel CT radiomics for predicting pathological grade of bladder cancer.

Zhao T, He J, Zhang L, Li H, Duan Q

PubMed | Jul 1 2025
To construct a predictive model using deep-learning radiomics and clinical risk factors for assessing the preoperative histopathological grade of bladder cancer from computed tomography (CT) images. A retrospective analysis was conducted involving 201 bladder cancer patients with definite pathological grading results after surgical excision at the study institution between January 2019 and June 2023. The cohort was divided into a training set of 120 cases and a test set of 81 cases. Hand-crafted radiomics (HCR) features and deep-learning (DL) features were obtained from the CT images. Prediction models were built using 12 machine-learning classifiers that integrate HCR features, DL features, and clinical data. Model performance was estimated using decision-curve analysis (DCA), the area under the curve (AUC), and calibration curves. Among the classifiers tested, the logistic regression model that combined DL and HCR features demonstrated the best performance. Its AUC values were 0.912 (training set) and 0.777 (test set). The clinical model achieved AUC values of 0.850 (training set) and 0.804 (test set). The AUC values of the combined model were 0.933 (training set) and 0.824 (test set), outperforming both the clinical and HCR-only models. The CT-based combined model demonstrated considerable diagnostic capability in differentiating high-grade from low-grade bladder cancer, serving as a valuable noninvasive instrument for preoperative pathological evaluation.
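
Decision-curve analysis summarizes clinical usefulness as net benefit across threshold probabilities, net benefit = TP/n - FP/n x pt/(1 - pt). The NumPy sketch below computes a decision curve for one model's predicted probabilities; the labels and probabilities are illustrative placeholders, not the study's data.

```python
import numpy as np

def net_benefit(y_true: np.ndarray, y_prob: np.ndarray, thresholds: np.ndarray) -> np.ndarray:
    """Net benefit at each threshold probability pt: TP/n - FP/n * pt / (1 - pt)."""
    n = len(y_true)
    benefits = []
    for pt in thresholds:
        treat = y_prob >= pt
        tp = np.sum(treat & (y_true == 1))
        fp = np.sum(treat & (y_true == 0))
        benefits.append(tp / n - fp / n * pt / (1.0 - pt))
    return np.array(benefits)

# Illustrative labels (1 = high-grade) and model probabilities.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=81)  # sized like the 81-case test set, values random
y_prob = np.clip(0.6 * y_true + rng.normal(0.2, 0.2, 81), 0, 1)

thresholds = np.linspace(0.05, 0.95, 19)
nb_model = net_benefit(y_true, y_prob, thresholds)
nb_treat_all = net_benefit(y_true, np.ones_like(y_prob), thresholds)
for pt, nb, nb_all in zip(thresholds[:5], nb_model[:5], nb_treat_all[:5]):
    print(f"pt = {pt:.2f}: model net benefit = {nb:.3f}, treat-all = {nb_all:.3f}")
```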