Predicting strength of femora with metastatic lesions from single 2D radiographic projections using convolutional neural networks.

Synek A, Benca E, Licandro R, Hirtler L, Pahr DH

PubMed · Jun 1, 2025
Patients with metastatic bone disease are at risk of pathological femoral fractures and may require prophylactic surgical fixation. Current clinical decision support tools often overestimate fracture risk, leading to overtreatment. While novel scores integrating femoral strength assessment via finite element (FE) models show promise, they require 3D imaging, extensive computation, and are difficult to automate. Predicting femoral strength directly from single 2D radiographic projections using convolutional neural networks (CNNs) could address these limitations, but this approach has not yet been explored for femora with metastatic lesions. This study aimed to test whether CNNs can accurately predict strength of femora with metastatic lesions from single 2D radiographic projections. CNNs with various architectures were developed and trained using an FE-model-generated training dataset. This training dataset was based on 36,000 modified computed tomography (CT) scans, created by randomly inserting artificial lytic lesions into the CT scans of 36 intact anatomical femoral specimens. From each modified CT scan, an anterior-posterior 2D projection was generated and femoral strength in one-legged stance was determined using nonlinear FE models. Following training, the CNNs' performance was evaluated on an independent experimental test dataset consisting of 31 anatomical femoral specimens (16 intact, 15 with artificial lytic lesions). 2D projections of each specimen were created from corresponding CT scans and femoral strength was assessed in mechanical tests. The CNNs' performance was evaluated using linear regression analysis and compared to 2D densitometric predictors (bone mineral density and content) and CT-based 3D FE models. All CNNs accurately predicted the experimentally measured strength in femora with and without metastatic lesions of the test dataset (R²≥0.80, CCC≥0.81). In femora with metastatic lesions, the performance of the CNNs (best: R²=0.84, CCC=0.86) was considerably superior to 2D densitometric predictors (R²≤0.07) and slightly inferior to 3D FE models (R²=0.90, CCC=0.94). CNNs, trained on a large dataset generated via FE models, predicted experimentally measured strength of femora with artificial metastatic lesions with accuracy comparable to 3D FE models. By eliminating the need for 3D imaging and reducing computational demands, this novel approach demonstrates potential for application in a clinical setting.
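
The abstract does not specify the network architectures; the sketch below is only a minimal, hypothetical illustration of the overall setup it describes, i.e. a CNN that regresses a scalar femoral strength value from a single 2D projection, trained on FE-simulated (projection, strength) pairs.

```python
# Minimal sketch (assumption): a CNN regressing femoral strength from one 2D projection.
# The actual architectures, input size, and training details are not given in the abstract.
import torch
import torch.nn as nn

class StrengthCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)   # scalar strength prediction

    def forward(self, x):              # x: (batch, 1, H, W) grayscale projection
        z = self.features(x).flatten(1)
        return self.head(z).squeeze(1)

model = StrengthCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# One hypothetical training step on a batch of FE-simulated (projection, strength) pairs.
projections = torch.randn(8, 1, 256, 256)   # placeholder data
strengths = torch.randn(8)                  # placeholder FE-derived strength values
optimizer.zero_grad()
loss = loss_fn(model(projections), strengths)
loss.backward()
optimizer.step()
```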

Predicting hepatocellular carcinoma response to TACE: A machine learning study based on 2.5D CT imaging and deep features analysis.

Lin C, Cao T, Tang M, Pu W, Lei P

PubMed · Jun 1, 2025
Prior to the commencement of treatment, it is essential to establish an objective method for accurately predicting the prognosis of patients with hepatocellular carcinoma (HCC) undergoing transarterial chemoembolization (TACE). In this study, we aimed to develop a machine learning (ML) model to predict the response of HCC patients to TACE based on CT image analysis. The public dataset from The Cancer Imaging Archive (TCIA), uploaded in August 2022, comprised a total of 105 cases, including 68 males and 37 females. The external testing dataset was collected from March 1, 2019 to July 1, 2022 and consisted of a total of 26 patients who underwent TACE treatment at our institution and were followed up for at least 3 months after TACE, including 22 males and 4 females. The public dataset was utilized for ResNet50 transfer learning and ML model construction, while the external testing dataset was used for model performance evaluation. The CT images showing the largest lesion in the axial, sagittal, and coronal orientations were selected to construct 2.5D images. Pre-trained ResNet50 weights were adapted through transfer learning to serve as a feature extractor, deriving deep features for building the ML models. Model performance was assessed using area under the curve (AUC), accuracy, F1-score, confusion matrix analysis, decision curves, and calibration curves. The AUC values for the external testing dataset were 0.90, 0.90, 0.91, and 0.89 for the random forest classifier (RFC), support vector classifier (SVC), logistic regression (LR), and extreme gradient boosting (XGB), respectively. The accuracy values for the external testing dataset were 0.79, 0.81, 0.80, and 0.80 for RFC, SVC, LR, and XGB, respectively. The F1-score values for the external testing dataset were 0.75, 0.77, 0.78, and 0.79 for RFC, SVC, LR, and XGB, respectively. The ML model constructed using deep features from 2.5D images has the potential to be applied in predicting the prognosis of HCC patients following TACE treatment.
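
As a rough illustration of the pipeline described here, the sketch below builds a 3-channel "2.5D" image from the three orthogonal largest-lesion slices, extracts deep features with an ImageNet-pretrained ResNet50, and fits conventional classifiers on those features; preprocessing, image size, and hyperparameters are assumptions rather than the authors' settings.

```python
# Minimal sketch (assumption): 2.5D image construction + ResNet50 deep features + classifiers.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights   # torchvision >= 0.13
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

def make_25d(axial, sagittal, coronal, size=224):
    """Stack three orthogonal slices (each resized to size x size) into one 3-channel image."""
    chans = []
    for sl in (axial, sagittal, coronal):
        sl = torch.tensor(sl, dtype=torch.float32)[None, None]        # (1, 1, H, W)
        sl = torch.nn.functional.interpolate(sl, size=(size, size), mode="bilinear")
        sl = (sl - sl.min()) / (sl.max() - sl.min() + 1e-8)           # crude intensity scaling
        chans.append(sl[0, 0])
    return torch.stack(chans)                                         # (3, size, size)

backbone = resnet50(weights=ResNet50_Weights.DEFAULT)
extractor = nn.Sequential(*list(backbone.children())[:-1]).eval()     # drop the final fc layer

@torch.no_grad()
def deep_features(image_25d):
    return extractor(image_25d[None]).flatten(1).numpy()              # (1, 2048) feature vector

# Hypothetical feature matrix X (n_patients x 2048) and TACE response labels y.
X = np.vstack([deep_features(make_25d(np.random.rand(64, 64),
                                      np.random.rand(64, 64),
                                      np.random.rand(64, 64))) for _ in range(20)])
y = np.random.randint(0, 2, size=20)

for clf in (RandomForestClassifier(), SVC(probability=True), LogisticRegression(max_iter=1000)):
    clf.fit(X, y)   # an XGBoost classifier (xgboost.XGBClassifier) would be fitted the same way
```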

Advanced image preprocessing and context-aware spatial decomposition for enhanced breast cancer segmentation.

Kalpana G, Deepa N, Dhinakaran D

PubMed · Jun 1, 2025
Breast cancer segmentation in medical imaging is challenged by noise, contrast variation, and low resolution, which make it difficult to distinguish malignant sites. In this paper, we propose a new solution that integrates AIPT (Advanced Image Preprocessing Techniques) with CASDN (Context-Aware Spatial Decomposition Network) to overcome these problems. The preprocessing pipeline applies a set of methods, including Adaptive Thresholding, Hierarchical Contrast Normalization, Contextual Feature Augmentation, Multi-Scale Region Enhancement, and Dynamic Histogram Equalization, to improve image quality. These methods smooth edges, equalize contrast, and preserve contextual detail, effectively suppressing noise and yielding clearer, less distorted images. Experimental outcomes demonstrate the method's effectiveness, delivering a Dice Coefficient of 0.89, IoU of 0.85, and a Hausdorff Distance of 5.2, indicating improved delineation of tumor margins over other techniques. Furthermore, the improved preprocessing pipeline benefits classification: a Convolutional Neural Network trained on the preprocessed images reaches a classification accuracy of 85.3 % and an AUC-ROC of 0.90, a significant improvement over conventional techniques.
• Enhanced segmentation accuracy with advanced preprocessing and CASDN, achieving superior performance metrics.
• Robust multi-modality compatibility, ensuring effectiveness across mammograms, ultrasounds, and MRI scans.
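
For context, the segmentation metrics reported above (Dice coefficient, IoU, Hausdorff distance) are typically computed from binary masks as in the generic sketch below; this is not the paper's code.

```python
# Generic illustration of the reported segmentation metrics on binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / (union + 1e-8)

def hausdorff(pred, gt):
    p, g = np.argwhere(pred), np.argwhere(gt)   # symmetric Hausdorff over foreground pixels
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

pred = np.zeros((128, 128), dtype=bool); pred[30:80, 30:80] = True   # hypothetical prediction
gt = np.zeros((128, 128), dtype=bool);   gt[35:85, 32:82] = True     # hypothetical ground truth
print(f"Dice={dice(pred, gt):.2f}, IoU={iou(pred, gt):.2f}, HD={hausdorff(pred, gt):.1f}")
```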

Machine Learning Models in the Detection of MB2 Canal Orifice in CBCT Images.

Shetty S, Yuvali M, Ozsahin I, Al-Bayatti S, Narasimhan S, Alsaegh M, Al-Daghestani H, Shetty R, Castelino R, David LR, Ozsahin DU

PubMed · Jun 1, 2025
The objective of the present study was to determine the accuracy of machine learning (ML) models in the detection of mesiobuccal (MB2) canals in axial cone-beam computed tomography (CBCT) sections. A total of 2500 CBCT scans from the oral radiology department of University Dental Hospital, Sharjah were screened to obtain 277 high-resolution, small field-of-view CBCT scans with maxillary molars. Among the 277 scans, 160 showed the presence of an MB2 orifice and the rest (117) did not. Two-dimensional axial images of these scans were then cropped. The images were classified and labelled as N (absence of MB2) and M (presence of MB2) by 2 examiners. The images were embedded using Google's Inception V3 and transferred to the ML classification models. Six different ML models (logistic regression [LR], naïve Bayes [NB], support vector machine [SVM], K-nearest neighbours [kNN], random forest [RF], neural network [NN]) were then tested on their ability to classify the images into M and N. The classification metrics (area under curve [AUC], accuracy, F1-score, precision) of the models were assessed in 3 steps. NN (0.896), LR (0.893), and SVM (0.886) showed the highest AUC values with specified target variables (steps 2 and 3). The highest accuracy was exhibited by LR (0.849) and NN (0.848) with specified target variables. The highest precision (86.8%) and recall (92.5%) were observed with the SVM model. The success rates (AUC, precision, recall) of the ML algorithms in the detection of MB2 were remarkable in our study. It was also observed that when the target variable was specified, success rates as high as 86.8% in precision and 92.5% in recall were achieved. The present study showed promising results for ML-based detection of the MB2 canal using axial CBCT slices.
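
A minimal, hypothetical sketch of the workflow described here: embed images with an ImageNet-pretrained Inception V3 (here via torchvision) and compare the six listed classifiers on the resulting feature vectors; the embedding tool, preprocessing, and evaluation protocol used in the study may differ.

```python
# Minimal sketch (assumption): Inception V3 embeddings + the six classifiers named above.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import inception_v3, Inception_V3_Weights
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

embedder = inception_v3(weights=Inception_V3_Weights.DEFAULT)
embedder.fc = nn.Identity()          # keep the 2048-dim pooled features instead of class logits
embedder.eval()

@torch.no_grad()
def embed(batch):                    # batch: (N, 3, 299, 299) normalized image tensor
    return embedder(batch).numpy()

# Hypothetical embeddings X and labels y (1 = MB2 orifice present "M", 0 = absent "N").
X = embed(torch.randn(40, 3, 299, 299))
y = np.random.randint(0, 2, size=40)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "NB": GaussianNB(),
    "SVM": SVC(probability=True),
    "kNN": KNeighborsClassifier(),
    "RF": RandomForestClassifier(),
    "NN": MLPClassifier(max_iter=500),
}
for name, clf in models.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC={auc:.3f}")
```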

Fast aberration correction in 3D transcranial photoacoustic computed tomography via a learning-based image reconstruction method.

Huang HK, Kuo J, Zhang Y, Aborahama Y, Cui M, Sastry K, Park S, Villa U, Wang LV, Anastasio MA

PubMed · Jun 1, 2025
Transcranial photoacoustic computed tomography (PACT) holds significant potential as a neuroimaging modality. However, compensating for skull-induced aberrations in reconstructed images remains a challenge. Although optimization-based image reconstruction methods (OBRMs) can account for the relevant wave physics, they are computationally demanding and generally require accurate estimates of the skull's viscoelastic parameters. To circumvent these issues, a learning-based image reconstruction method was investigated for three-dimensional (3D) transcranial PACT. The method was systematically assessed in virtual imaging studies that involved stochastic 3D numerical head phantoms and applied to experimental data acquired by use of a physical head phantom that involved a human skull. The results demonstrated that the learning-based method yielded accurate images and exhibited robustness to errors in the assumed skull properties, while substantially reducing computational times compared to an OBRM. To the best of our knowledge, this is the first demonstration of a learned image reconstruction method for 3D transcranial PACT.
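
The abstract does not describe the network, so the sketch below only illustrates the general idea of learned aberration correction as 3D image-to-image post-processing, i.e. a small residual CNN mapping an initial, skull-aberrated reconstruction toward a corrected volume; it is an assumption, not the authors' method.

```python
# Hypothetical sketch: learned correction of a skull-aberrated 3D reconstruction.
import torch
import torch.nn as nn

class ResidualCorrector3D(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 1, 3, padding=1),
        )

    def forward(self, x):            # x: (batch, 1, D, H, W) initial reconstruction
        return x + self.net(x)       # predict a correction on top of the aberrated image

model = ResidualCorrector3D()
aberrated = torch.randn(1, 1, 32, 64, 64)                               # placeholder volume
corrected = model(aberrated)
loss = nn.functional.mse_loss(corrected, torch.randn_like(corrected))   # target: phantom truth
loss.backward()
```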

Integrating finite element analysis and physics-informed neural networks for biomechanical modeling of the human lumbar spine.

Ahmadi M, Biswas D, Paul R, Lin M, Tang Y, Cheema TS, Engeberg ED, Hashemi J, Vrionis FD

PubMed · Jun 1, 2025
Comprehending the biomechanical characteristics of the human lumbar spine is crucial for managing and preventing spinal disorders. Precise material properties derived from patient-specific CT scans are essential for simulations to accurately mimic real-life scenarios, which is invaluable in creating effective surgical plans. The integration of Finite Element Analysis (FEA) with Physics-Informed Neural Networks (PINNs) offers significant clinical benefits by automating lumbar spine segmentation and meshing. We developed an FEA model of the lumbar spine incorporating detailed anatomical and material properties derived from high-quality CT and MRI scans. The model includes vertebrae and intervertebral discs, segmented and meshed using advanced imaging and computational techniques. PINNs were implemented to integrate physical laws directly into the neural network training process, ensuring that the predicted material properties adhered to the governing equations of mechanics. The model achieved an accuracy of 94.30% in predicting material properties such as Young's modulus (14.88 GPa for cortical bone and 1.23 MPa for intervertebral discs), Poisson's ratio (0.25 and 0.47, respectively), bulk modulus (9.87 GPa and 6.56 MPa, respectively), and shear modulus (5.96 GPa and 0.42 MPa, respectively). In summary, we developed a lumbar spine FEA model using anatomical and material properties from CT and MRI scans; vertebrae and discs were segmented and meshed with advanced imaging techniques, while PINNs ensured that material predictions followed mechanical laws. The integration of FEA and PINNs allows for accurate, automated prediction of material properties and mechanical behaviors of the lumbar spine, significantly reducing manual input and enhancing reliability. This approach ensures dependable biomechanical simulations, supports the development of personalized treatment plans and surgical strategies, and ultimately improves clinical outcomes, patient care, and recovery in spinal disorders.
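
As a minimal illustration of the PINN idea mentioned here, adding physics residuals to the training loss, the sketch below has a small network predict (E, ν, K, G) and penalizes violations of the isotropic elasticity identities K = E / (3(1 - 2ν)) and G = E / (2(1 + ν)); the paper's actual inputs, architecture, and governing equations may differ.

```python
# Minimal sketch (assumption): physics-informed loss enforcing isotropic elasticity relations.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 4))

def total_loss(x, target_props):
    E, nu, K, G = net(x).unbind(dim=1)                # predicted material properties
    data_loss = nn.functional.mse_loss(torch.stack([E, nu, K, G], dim=1), target_props)
    # Physics residuals: predictions must satisfy the isotropic elasticity identities.
    res_K = K - E / (3.0 * (1.0 - 2.0 * nu))
    res_G = G - E / (2.0 * (1.0 + nu))
    physics_loss = (res_K ** 2).mean() + (res_G ** 2).mean()
    return data_loss + 1.0 * physics_loss             # weighting of the physics term is an assumption

x = torch.randn(32, 8)                                # hypothetical CT/MRI-derived features
target = torch.rand(32, 4)                            # hypothetical reference properties (scaled)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
optimizer.zero_grad()
loss = total_loss(x, target)
loss.backward()
optimizer.step()
```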

Focal cortical dysplasia detection by artificial intelligence using MRI: A systematic review and meta-analysis.

Dashtkoohi M, Ghadimi DJ, Moodi F, Behrang N, Khormali E, Salari HM, Cohen NT, Gholipour T, Saligheh Rad H

PubMed · Jun 1, 2025
Focal cortical dysplasia (FCD) is a common cause of pharmacoresistant epilepsy. However, it can be challenging to detect FCD using MRI alone. This study aimed to review and analyze studies that used machine learning (ML) and artificial neural network (ANN) methods as an additional tool to enhance MRI findings in FCD patients. A systematic search was conducted in four databases (Embase, PubMed, Scopus, and Web of Science). The quality of the studies was assessed using QUADAS-AI, and a bivariate random-effects model was used for analysis. The main outcomes analyzed were the sensitivity and specificity of patient-wise detection. Heterogeneity among studies was assessed using I². A total of 41 studies met the inclusion criteria, including 24 ANN-based studies and 17 machine learning studies. Meta-analysis of internal validation datasets showed a pooled sensitivity of 0.81 and specificity of 0.92 for AI-based models in detecting FCD lesions. Meta-analysis of external validation datasets yielded a pooled sensitivity of 0.73 and specificity of 0.66. There was moderate heterogeneity among studies in the external validation dataset, but no significant publication bias was found. Although there is an increasing number of machine learning and ANN-based models for FCD detection, their clinical applicability remains limited. Further refinement and optimization, along with longitudinal studies, are needed to ensure their integration into clinical practice. Addressing the identified limitations and intensifying research efforts will improve their relevance and reliability in real medical scenarios.
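
The review used a bivariate random-effects model; as a simpler, hypothetical illustration of random-effects pooling, the sketch below pools logit-transformed per-study sensitivities with a univariate DerSimonian-Laird model and reports I²; the counts are placeholders, not the review's data.

```python
# Illustrative univariate DerSimonian-Laird pooling of sensitivities (not the review's bivariate model).
import numpy as np

def pooled_sensitivity(tp, fn):
    tp, fn = np.asarray(tp, float), np.asarray(fn, float)
    sens = tp / (tp + fn)
    y = np.log(sens / (1 - sens))              # logit-transformed sensitivities
    v = 1.0 / tp + 1.0 / fn                    # approximate within-study variances on logit scale
    w = 1.0 / v                                # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fe) ** 2)            # Cochran's Q heterogeneity statistic
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                    # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    i2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0   # I^2 heterogeneity (%)
    return 1.0 / (1.0 + np.exp(-y_re)), i2

# Hypothetical per-study true positives and false negatives.
sens_pooled, i2 = pooled_sensitivity(tp=[40, 55, 30, 62], fn=[8, 12, 10, 9])
print(f"pooled sensitivity={sens_pooled:.2f}, I2={i2:.0f}%")
```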

The integration of artificial intelligence into clinical medicine: Trends, challenges, and future directions.

Aravazhi PS, Gunasekaran P, Benjamin NZY, Thai A, Chandrasekar KK, Kolanu ND, Prajjwal P, Tekuru Y, Brito LV, Inban P

PubMed · Jun 1, 2025
AI has emerged as a transformative force in clinical medicine, changing the diagnosis, treatment, and management of patients. Tools built on machine learning (ML), deep learning (DL), and natural language processing (NLP) algorithms can analyze large, complex medical datasets with unprecedented accuracy and speed, thereby improving diagnostic precision, treatment personalization, and patient care outcomes. For example, CNNs have dramatically improved the accuracy of medical imaging diagnoses, and NLP algorithms have greatly helped extract insights from unstructured data, including EHRs. However, numerous challenges still face AI integration into clinical workflows, including data privacy, algorithmic bias, ethical dilemmas, and the limited interpretability of "black-box" AI models. These barriers have thus far prevented the widespread application of AI in health care, making a systematic exploration of its trends, obstacles, and future implications necessary. The purpose of this paper is, therefore, to assess current trends in AI applications in clinical medicine, identify the obstacles hindering adoption, and outline possible future directions. This research synthesizes evidence from peer-reviewed articles to provide a more comprehensive understanding of the role AI plays in advancing clinical practice, improving patient outcomes, and enhancing decision-making. A systematic review was conducted according to the PRISMA guidelines to explore the integration of artificial intelligence in clinical medicine, including trends, challenges, and future directions. PubMed, Cochrane Library, Web of Science, and Scopus databases were searched for peer-reviewed articles from 2014 to 2024 with keywords such as "Artificial Intelligence in Medicine," "AI in Clinical Practice," "Machine Learning in Healthcare," and "Ethical Implications of AI in Medicine." Studies focusing on AI applications in diagnostics, treatment planning, and patient care that reported measurable clinical outcomes were included. Non-clinical AI applications and articles published before 2014 were excluded. Selected studies were screened for relevance, and their quality was critically appraised so that data could be synthesized reliably and rigorously. This systematic review includes the findings of 8 studies that point to the transformational role of AI in clinical medicine. AI tools, such as CNNs, achieved higher diagnostic accuracy than traditional methods, particularly in radiology and pathology. Predictive models efficiently supported risk stratification, early disease detection, and personalized medicine. Despite these improvements, significant hurdles, including data privacy, algorithmic bias, and clinician resistance to the "black-box" nature of AI, have yet to be surmounted. Explainable AI (XAI) has emerged as an attractive solution that promises to enhance interpretability and trust. Overall, AI appears promising for enhancing diagnostics, treatment personalization, and clinical workflows by addressing systemic inefficiencies. AI has the potential to transform diagnostics, treatment strategies, and efficiency in clinical medicine. Overcoming obstacles such as concerns about data privacy, the danger of algorithmic bias, and difficulties with interpretability may pave the way for broader adoption, improve patient outcomes, and transform clinical workflows toward more sustainable healthcare delivery.

Brain Age Gap Associations with Body Composition and Metabolic Indices in an Asian Cohort: An MRI-Based Study.

Lee HJ, Kuo CY, Tsao YC, Lee PL, Chou KH, Lin CJ, Lin CP

PubMed · Jun 1, 2025
Global aging raises concerns about cognitive health, metabolic disorders, and sarcopenia. Prevention of reversible decline and diseases in middle-aged individuals is essential for promoting healthy aging. We hypothesize that changes in body composition, specifically muscle mass and visceral fat, and in metabolic indices are associated with accelerated brain aging. To explore these relationships, we employed a brain age model to investigate the links between the brain age gap (BAG), body composition, and metabolic markers. Using T1-weighted anatomical brain MRIs, we developed a machine learning model to predict brain age from gray matter features, trained on 2,675 healthy individuals aged 18-92 years. This model was then applied to a separate cohort of 458 Taiwanese adults (57.8 ± 11.6 years; 280 men) to assess associations between BAG, body composition quantified by MRI, and metabolic markers. Our model demonstrated reliable generalizability for predicting individual age in the clinical dataset (MAE = 6.11 years, r = 0.900). Key findings included significant correlations between a larger BAG and reduced total abdominal muscle area (r = -0.146, p = 0.018), lower BMI-adjusted skeletal muscle indices (r = -0.134, p = 0.030), increased systemic inflammation, as indicated by high-sensitivity C-reactive protein levels (r = 0.121, p = 0.048), and elevated fasting glucose levels (r = 0.149, p = 0.020). Our findings confirm that declines in muscle mass and metabolic health are associated with accelerated brain aging. Interventions to improve muscle health and metabolic control may mitigate the adverse effects of brain aging, supporting healthier aging trajectories.
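
A minimal sketch of the generic brain-age workflow described above: fit a regressor on gray-matter features, predict age in a separate cohort, compute the brain age gap (BAG), and correlate it with body-composition or metabolic markers. The model choice, features, and data below are assumptions and placeholders.

```python
# Minimal sketch (assumption): brain age prediction and brain age gap (BAG) associations.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Training cohort: gray-matter features and chronological age (placeholders).
X_train, age_train = rng.normal(size=(2675, 100)), rng.uniform(18, 92, 2675)
model = Ridge(alpha=1.0).fit(X_train, age_train)

# Clinical cohort: predict brain age, then compute the brain age gap.
X_clin, age_clin = rng.normal(size=(458, 100)), rng.uniform(30, 85, 458)
pred_age = model.predict(X_clin)
print("MAE:", mean_absolute_error(age_clin, pred_age))
bag = pred_age - age_clin                       # positive BAG = brain appears older than it is

# Associate BAG with a body-composition or metabolic marker (placeholder values).
abdominal_muscle_area = rng.normal(size=458)
r, p = pearsonr(bag, abdominal_muscle_area)
print(f"r={r:.3f}, p={p:.3f}")
```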

Evaluating the prognostic significance of artificial intelligence-delineated gross tumor volume and prostate volume measurements for prostate radiotherapy.

Adleman J, McLaughlin PY, Tsui JMG, Buzurovic I, Harris T, Hudson J, Urribarri J, Cail DW, Nguyen PL, Orio PF, Lee LK, King MT

PubMed · Jun 1, 2025
Artificial intelligence (AI) may extract prognostic information from MRI for localized prostate cancer. We evaluate whether AI-derived prostate and gross tumor volume (GTV) are associated with toxicity and oncologic outcomes after radiotherapy. We conducted a retrospective study of patients who underwent radiotherapy between 2010 and 2017. We trained an AI segmentation algorithm to contour the prostate and GTV from patients treated with external-beam RT, and applied the algorithm to those treated with brachytherapy. AI prostate and GTV volumes were calculated from the segmentation results. We evaluated whether AI GTV volume was associated with biochemical failure (BF) and metastasis. We evaluated whether AI prostate volume was associated with acute and late grade 2+ genitourinary toxicity, and with International Prostate Symptom Score (IPSS) resolution, for the monotherapy and combination sets separately. We identified 187 patients who received brachytherapy (monotherapy (N = 154) or combination therapy (N = 33)). AI GTV volume was associated with BF (hazard ratio (HR): 1.28 [1.14, 1.44]; p < 0.001) and metastasis (HR: 1.34 [1.18, 1.53]; p < 0.001). For the monotherapy subset, AI prostate volume was associated with both acute (adjusted odds ratio: 1.16 [1.07, 1.25]; p < 0.001) and late grade 2+ genitourinary toxicity (adjusted HR: 1.04 [1.01, 1.07]; p = 0.01), but not IPSS resolution (0.99 [0.97, 1.00]; p = 0.13). For the combination therapy subset, AI prostate volume was not associated with either acute (p = 0.72) or late (p = 0.75) grade 2+ urinary toxicity. However, AI prostate volume was associated with IPSS resolution (0.96 [0.93, 0.99]; p = 0.01). AI-derived prostate and GTV volumes may be prognostic for toxicity and oncologic outcomes after RT. Such information may aid in treatment decision-making, given differences in outcomes among RT treatment modalities.
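
As a generic illustration of this kind of analysis, the sketch below derives a volume from a binary AI segmentation mask and tests its association with a time-to-event outcome in a Cox proportional-hazards model (using the lifelines package as one possible choice); the column names and data are placeholders, not the study's dataset.

```python
# Illustrative sketch: volume from a segmentation mask + Cox proportional-hazards association.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def volume_cc(mask, spacing_mm):
    """Volume of a binary segmentation mask in cubic centimetres."""
    return mask.sum() * np.prod(spacing_mm) / 1000.0

mask = np.zeros((64, 64, 64), dtype=bool); mask[20:30, 20:30, 20:30] = True   # hypothetical GTV mask
print("GTV volume (cc):", volume_cc(mask, spacing_mm=(1.0, 1.0, 3.0)))

# Hypothetical cohort: AI GTV volume vs. time to biochemical failure.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "gtv_cc": rng.gamma(2.0, 2.0, 187),
    "time_months": rng.exponential(60, 187),
    "biochemical_failure": rng.integers(0, 2, 187),
})
cph = CoxPHFitter().fit(df, duration_col="time_months", event_col="biochemical_failure")
cph.print_summary()   # hazard ratio per unit increase in AI GTV volume
```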