
Application of Artificial Intelligence in Bone Quality and Quantity Assessment for Dental Implant Planning: A Scoping Review.

Qiu S, Yu X, Wu Y

PubMed · Aug 8, 2025
To assess how artificial intelligence (AI) models perform in evaluating bone quality and quantity during preoperative planning for dental implants. This review included studies that applied AI-based assessments of bone quality and/or quantity to radiographic images in the preoperative phase. Studies published in English before April 2025 were included, identified through searches of PubMed/MEDLINE, Embase, Web of Science, Scopus, and the Cochrane Library, supplemented by manual searches. Eleven studies met the inclusion criteria: five focused on bone quality evaluation and six included volumetric assessments using AI models. Performance measures included accuracy, sensitivity, specificity, precision, F1 score, and Dice coefficient, compared against human expert evaluations. AI models demonstrated high accuracy (76.2%-99.84%), sensitivity (78.9%-100%), and specificity (66.2%-99%). AI models show potential for evaluating bone quality and quantity, although standardization and external validation studies are lacking. Future studies should employ multicenter datasets, integrate models into clinical workflows, and refine models to better reflect real-life conditions. AI has the potential to offer clinicians reliable automated evaluations of bone quality and quantity, with the promise of a fully automated implant-planning system. It may also make evidence-based preoperative decision-making more efficient.
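Among the performance measures listed above, the Dice coefficient is the standard overlap metric for segmentation tasks. As a quick illustration (not drawn from any of the reviewed studies), a minimal NumPy sketch comparing an AI mask against an expert reference mask:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)

# Toy 1-D "masks": AI segmentation vs. expert reference
ai = np.array([1, 1, 1, 0, 0, 0])
expert = np.array([1, 1, 0, 0, 0, 0])
score = dice_coefficient(ai, expert)  # 2*2 overlapping voxels / (3 + 2)
```

A Dice of 1.0 means perfect overlap; values near the reported sensitivities/specificities would correspond to clinically useful agreement.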

Automated coronary artery segmentation / tissue characterization and detection of lipid-rich plaque: An integrated backscatter intravascular ultrasound study.

Masuda Y, Takeshita R, Tsujimoto A, Sahashi Y, Watanabe T, Fukuoka D, Hara T, Kanamori H, Okura H

PubMed · Aug 8, 2025
Intravascular ultrasound (IVUS)-based tissue characterization has been used to detect vulnerable plaque or lipid-rich plaque (LRP). Recently, advances in artificial intelligence (AI) have enabled automated coronary arterial plaque segmentation and tissue characterization. The purpose of this study was to evaluate the feasibility and diagnostic accuracy of a deep learning model for plaque segmentation, tissue characterization, and identification of LRP. A total of 1,098 IVUS images from 67 patients who underwent IVUS-guided percutaneous coronary intervention were selected for the training group, while 1,100 IVUS images from 100 vessels (88 patients) were used for the validation group. A 7-layer U-Net++ was applied for automated coronary artery segmentation and tissue characterization. Segmentation and quantification of the external elastic membrane (EEM), lumen, and guidewire artifact were performed and compared with manual measurements. Plaque tissue characterization was conducted using integrated backscatter (IB)-IVUS as the gold standard. LRP was defined as a %lipid area of ≥65%. The deep learning model accurately segmented the EEM and lumen. AI-predicted %lipid area (R = 0.90, P < 0.001), %fibrosis area (R = 0.89, P < 0.001), %dense fibrosis area (R = 0.81, P < 0.001), and %calcification area (R = 0.89, P < 0.001) showed strong correlations with IB-IVUS measurements. The model predicted LRP with a sensitivity of 62%, specificity of 94%, positive predictive value of 69%, negative predictive value of 92%, and an area under the receiver operating characteristic curve of 0.919 (95% CI: 0.902-0.934). The deep learning model demonstrated accurate automatic segmentation and tissue characterization of human coronary arteries, showing promise for identifying LRP.
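The study's LRP criterion (%lipid area ≥ 65%) reduces to simple arithmetic on a per-pixel tissue label map. A minimal sketch, where the label values and helper names are illustrative assumptions rather than the authors' code:

```python
import numpy as np

LRP_THRESHOLD = 65.0  # % lipid area cutoff used to define lipid-rich plaque

def percent_lipid_area(tissue_map: np.ndarray, lipid_label: int = 1) -> float:
    """Percentage of plaque pixels labelled as lipid.
    tissue_map: integer label map over plaque pixels (0 = background)."""
    plaque_pixels = (tissue_map > 0).sum()
    if plaque_pixels == 0:
        return 0.0
    return float(100.0 * (tissue_map == lipid_label).sum() / plaque_pixels)

def is_lipid_rich(tissue_map: np.ndarray) -> bool:
    return percent_lipid_area(tissue_map) >= LRP_THRESHOLD

# Toy 3x3 label map: 1 = lipid, 2 = fibrosis, 0 = background
toy = np.array([[1, 1, 2],
                [1, 1, 2],
                [0, 0, 0]])
pct = percent_lipid_area(toy)  # 4 lipid pixels out of 6 plaque pixels
```

Sweeping the threshold over a validation set is what produces the sensitivity/specificity trade-off summarized by the reported ROC curve.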

Fourier Optics and Deep Learning Methods for Fast 3D Reconstruction in Digital Holography

Justin London

arXiv preprint · Aug 8, 2025
Computer-generated holography (CGH) is a promising method that modulates user-defined waveforms with digital holograms. An efficient and fast pipeline framework is proposed to synthesize CGH from initial point-cloud and MRI data. The input data are reconstructed into volumetric objects that are then fed into non-convex Fourier optics optimization algorithms for phase-only hologram (POH) and complex hologram (CH) generation, using alternating projection, SGD, and quasi-Newton methods. The reconstruction performance of these algorithms, measured by MSE, RMSE, and PSNR, is analyzed and compared against HoloNet deep learning CGH. The performance metrics are shown to improve when 2D median filtering is used to remove artifacts and speckle noise during optimization.
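The alternating-projection optimizer mentioned above follows the classic Gerchberg-Saxton pattern: iterate between the hologram plane (unit-amplitude constraint for a phase-only hologram) and the image plane (target-amplitude constraint), linked by Fourier transforms. A minimal NumPy sketch of the general technique, not the paper's implementation:

```python
import numpy as np

def alternating_projection_poh(target_amp: np.ndarray, iters: int = 100, seed: int = 0):
    """Gerchberg-Saxton-style alternating projection for a phase-only hologram (POH).
    Returns the optimized hologram phase and the resulting reconstruction amplitude."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)  # random initial phase
    for _ in range(iters):
        field = np.exp(1j * phase)                            # unit-amplitude hologram plane
        image = np.fft.fft2(field)                            # propagate to image plane
        image = target_amp * np.exp(1j * np.angle(image))     # impose target amplitude
        field = np.fft.ifft2(image)                           # propagate back
        phase = np.angle(field)                               # keep phase only (POH constraint)
    recon = np.abs(np.fft.fft2(np.exp(1j * phase)))
    return phase, recon

# Toy target: bright square on a dark background
target = np.zeros((32, 32))
target[12:20, 12:20] = 1.0
phase, recon = alternating_projection_poh(target)
```

The SGD and quasi-Newton variants in the paper instead minimize a differentiable reconstruction loss over the phase directly; the median-filtering step would be applied to the reconstruction inside this loop.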

Deep Learning Chest X-Ray Age, Epigenetic Aging Clocks and Associations with Age-Related Subclinical Disease in the Project Baseline Health Study.

Chandra J, Short S, Rodriguez F, Maron DJ, Pagidipati N, Hernandez AF, Mahaffey KW, Shah SH, Kiel DP, Lu MT, Raghu VK

PubMed · Aug 8, 2025
Chronological age is an important component of medical risk scores and decision-making. However, there is considerable variability in how individuals age. We recently published an open-source deep learning model to assess biological age from chest radiographs (CXR-Age), which predicts all-cause and cardiovascular mortality better than chronological age. Here, we compare CXR-Age to two established epigenetic aging clocks (first generation: Horvath Age; second generation: DNAm PhenoAge) to test which is more strongly associated with cardiopulmonary disease and frailty. Our cohort consisted of 2,097 participants from the Project Baseline Health Study, a prospective cohort study of individuals from four US sites. We compared the associations between the different aging clocks and measures of cardiopulmonary disease, frailty, and protein abundance collected at each participant's first annual visit, using linear regression models adjusted for common confounders. We found that CXR-Age was associated with coronary calcium, cardiovascular risk factors, worsening pulmonary function, increased frailty, and plasma abundance of two proteins implicated in neuroinflammation and aging. Associations with DNAm PhenoAge were weaker for pulmonary function and for all metrics in middle-aged adults. We identified thirteen proteins associated with DNAm PhenoAge, one of which (CDH13) was also associated with CXR-Age. No associations were found with Horvath Age. These results suggest that CXR-Age may serve as a better metric of cardiopulmonary aging than epigenetic aging clocks, especially in midlife adults.

An Anisotropic Cross-View Texture Transfer with Multi-Reference Non-Local Attention for CT Slice Interpolation.

Uhm KH, Cho H, Hong SH, Jung SW

PubMed · Aug 8, 2025
Computed tomography (CT) is one of the most widely used non-invasive imaging modalities for medical diagnosis. In clinical practice, CT images are usually acquired with large slice thicknesses due to the high cost of memory storage and operation time, resulting in anisotropic CT volumes with much lower inter-slice resolution than in-plane resolution. Since such inconsistent resolution may lead to difficulties in disease diagnosis, deep learning-based volumetric super-resolution methods have been developed to improve inter-slice resolution. Most existing methods conduct single-image super-resolution on the through-plane or synthesize intermediate slices from adjacent slices; however, the anisotropic characteristics of 3D CT volumes have not been well explored. In this paper, we propose a novel cross-view texture transfer approach for CT slice interpolation that fully exploits the anisotropic nature of 3D CT volumes. Specifically, we design a unique framework that takes high-resolution in-plane texture details as a reference and transfers them to low-resolution through-plane images. To this end, we introduce a multi-reference non-local attention module that extracts meaningful features for reconstructing through-plane high-frequency details from multiple in-plane images. Through extensive experiments, we demonstrate that our method performs significantly better in CT slice interpolation than existing competing methods on public CT datasets, including a real-paired benchmark, verifying the effectiveness of the proposed framework. The source code of this work is available at https://github.com/khuhm/ACVTT.
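The core of multi-reference non-local attention is that each position in a low-resolution through-plane feature map attends over feature positions pooled from several high-resolution in-plane slices. A minimal scaled dot-product sketch with NumPy — shapes, names, and the single-head formulation are illustrative assumptions, not the authors' module:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def multi_reference_nonlocal_attention(query: np.ndarray, references: list) -> np.ndarray:
    """query: (Nq, d) features from the low-resolution through-plane slice.
    references: list of (Nr_i, d) feature arrays from high-resolution in-plane slices.
    Each query position aggregates texture features from ALL reference positions."""
    keys = np.concatenate(references, axis=0)   # pool positions across references
    d = query.shape[1]
    scores = query @ keys.T / np.sqrt(d)        # similarity to every reference position
    weights = softmax(scores, axis=1)           # non-local attention weights (rows sum to 1)
    return weights @ keys                       # weighted sum of reference features

rng = np.random.default_rng(0)
q = rng.standard_normal((5, 8))                 # 5 through-plane positions, 8-dim features
refs = [rng.standard_normal((10, 8)) for _ in range(3)]  # 3 in-plane reference slices
out = multi_reference_nonlocal_attention(q, refs)
```

In the paper's trained module the queries, keys, and values would come from learned projections; here keys double as values to keep the sketch short.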

GPT-4 for automated sequence-level determination of MRI protocols based on radiology request forms from clinical routine.

Terzis R, Kaya K, Schömig T, Janssen JP, Iuga AI, Kottlors J, Lennartz S, Gietzen C, Gözdas C, Müller L, Hahnfeldt R, Maintz D, Dratsch T, Pennig L

PubMed · Aug 8, 2025
This study evaluated GPT-4's accuracy in MRI sequence selection based on radiology request forms (RRFs), comparing its performance to that of radiology residents. This retrospective study included 100 RRFs across four subspecialties (cardiac imaging, neuroradiology, musculoskeletal imaging, and oncology). GPT-4 and two radiology residents (R1: 2 years of MRI experience; R2: 5 years) selected sequences based on each patient's medical history and clinical questions. Considering imaging society guidelines, five board-certified subspecialized radiologists assessed the protocols in consensus for completeness, quality, and utility, using 5-point Likert scales. Clinical applicability was rated binarily by the institution's lead radiographer. GPT-4 achieved median scores of 3 (1-5) for completeness, 4 (1-5) for quality, and 4 (1-5) for utility, comparable to R1 (3 (1-5), 4 (1-5), 4 (1-5); each p > 0.05) but inferior to R2 (4 (1-5), 5 (1-5); p < 0.01, respectively, and 5 (1-5); p < 0.001). Subspecialty protocol quality varied: GPT-4 matched R1 (4 (2-4) vs. 4 (2-5), p = 0.20) and R2 (4 (2-5); p = 0.47) in cardiac imaging; showed no differences in neuroradiology (all 5 (1-5), p > 0.05); scored lower than R1 and R2 in musculoskeletal imaging (3 (2-5) vs. 4 (3-5); p < 0.01, and 5 (3-5); p < 0.001); and matched R1 (4 (1-5) vs. 2 (1-4), p = 0.12) as well as R2 (5 (2-5); p = 0.20) in oncology. GPT-4-based protocols were clinically applicable in 95% of cases, comparable to R1 (95%) and R2 (96%). GPT-4 generated MRI protocols with notable completeness, quality, utility, and clinical applicability, excelling in standardized subspecialties such as cardiac imaging and neuroradiology while yielding lower accuracy in musculoskeletal examinations.
Question: Long MRI acquisition times limit patient access, making accurate protocol selection crucial for efficient diagnostics, though it is time-consuming and error-prone, especially for inexperienced residents.
Findings: GPT-4 generated MRI protocols of remarkable yet inconsistent quality, performing on par with an experienced resident in standardized fields but only moderately in musculoskeletal examinations.
Clinical relevance: The large language model can assist less experienced radiologists in determining detailed MRI protocols and counteract increasing workloads. It could function as a semi-automatic tool, generating MRI protocols for radiologists' confirmation, optimizing resource allocation, and improving diagnostics and cost-effectiveness.

GAN-MRI enhanced multi-organ MRI segmentation: a deep learning perspective.

Channarayapatna Srinivasa A, Bhat SS, Baduwal D, Sim ZTJ, Patil SS, Amarapur A, Prakash KNB

PubMed · Aug 8, 2025
Clinical magnetic resonance imaging (MRI) is a high-resolution tool widely used for detailed anatomical imaging. However, prolonged scan times often lead to motion artefacts and patient discomfort. Fast acquisition techniques can reduce scan times but often produce noisy, low-contrast images, compromising the segmentation accuracy essential for diagnosis and treatment planning. To address these limitations, we developed an end-to-end framework that incorporates a BIDS-based data organiser and anonymiser, a GAN-based MR image enhancement model (GAN-MRI), AssemblyNet for brain region segmentation, and an attention-residual U-Net with guided loss for abdominal and thigh segmentation. Thirty brain scans (5,400 slices), 32 abdominal scans (1,920 slices), and 55 thigh scans (2,200 slices) acquired on multiple MRI scanners (GE, Siemens, Toshiba) underwent evaluation. Image quality improved significantly, with SNR and CNR for brain scans increasing from 28.44 to 42.92 (p < 0.001) and from 11.88 to 18.03 (p < 0.001), respectively. Abdominal scans exhibited SNR increases from 35.30 to 50.24 (p < 0.001) and CNR increases from 10,290.93 to 93,767.22 (p < 0.001). Double-blind evaluations highlighted improved visualisation of anatomical structures and bias field correction. Segmentation performance improved substantially in the thigh (muscle: +21%, IMAT: +9%) and abdominal regions (SSAT: +1%, DSAT: +2%, VAT: +12%), while brain segmentation metrics remained largely stable, reflecting the robustness of the baseline model. The proposed framework is designed to handle data from multiple anatomies, with variations across MRI scanners and centres, by enhancing MRI scans and improving segmentation accuracy, diagnostic precision, and treatment planning while reducing scan times and maintaining patient comfort.
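SNR and CNR conventions vary between papers, and this abstract does not state its formulas. A minimal sketch under one common ROI-based definition (mean signal over background-noise standard deviation — an assumption, not the authors' exact metric):

```python
import numpy as np

def snr(signal_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """SNR: mean intensity of a tissue ROI over the std of a background (air) ROI."""
    return float(signal_roi.mean() / noise_roi.std())

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, noise_roi: np.ndarray) -> float:
    """CNR: absolute mean-intensity difference of two tissue ROIs over background noise."""
    return float(abs(roi_a.mean() - roi_b.mean()) / noise_roi.std())

# Synthetic ROI intensities standing in for pre/post-enhancement measurements
rng = np.random.default_rng(42)
tissue = rng.normal(100.0, 5.0, 1000)     # bright tissue ROI
other = rng.normal(60.0, 5.0, 1000)       # darker adjacent-tissue ROI
background = rng.normal(0.0, 2.0, 1000)   # air/background ROI
```

A GAN enhancement step that sharpens tissue means or suppresses background noise raises both quantities, which is the kind of pre/post change the reported 28.44 → 42.92 SNR improvement summarizes.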

MRI-based radiomics for preoperative T-staging of rectal cancer: a retrospective analysis.

Patanè V, Atripaldi U, Sansone M, Marinelli L, Del Tufo S, Arrichiello G, Ciardiello D, Selvaggi F, Martinelli E, Reginelli A

PubMed · Aug 8, 2025
Preoperative T-staging in rectal cancer is essential for treatment planning, yet conventional MRI shows limited accuracy (~60-78%). Our study investigates whether radiomic analysis of high-resolution T2-weighted MRI can non-invasively improve staging accuracy, through a retrospective evaluation in a real-world surgical cohort. This single-center retrospective study included 200 patients (January 2024-April 2025) with pathologically confirmed rectal cancer, all of whom underwent preoperative high-resolution T2-weighted MRI within one week prior to curative surgery and received no neoadjuvant therapy. Manual segmentation was performed using ITK-SNAP, followed by extraction of 107 radiomic features via PyRadiomics. Feature selection employed mRMR and LASSO logistic regression, culminating in a Rad-score predictive model. Statistical performance was evaluated using ROC curves (AUC), accuracy, sensitivity, specificity, and DeLong's test. Among the 200 patients, 95 were pathologically staged as T2 and 105 as T3-T4 (55 T3, 50 T4). After preprocessing, 26 radiomic features were retained; key features including ngtdm_contrast and ngtdm_coarseness showed AUC values > 0.70. The LASSO-based model achieved an AUC of 0.82 (95% CI: 0.75-0.89), with an overall accuracy of 81%, sensitivity of 78%, and specificity of 84%. Radiomic analysis of standard preoperative T2-weighted MRI provides a reliable, non-invasive method to predict rectal cancer T-stage. This approach has the potential to enhance staging accuracy and inform personalized surgical planning. Prospective multicenter validation is required for broader clinical implementation.
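The Rad-score pipeline described here — standardize features, fit an L1-penalised (LASSO) logistic regression so most coefficients shrink to zero, then use the linear combination of surviving features as the score — can be sketched with scikit-learn on synthetic data. The feature matrix, coefficients, and regularization strength below are illustrative stand-ins, not the study's values:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for 26 retained radiomic features (e.g. ngtdm_contrast, ngtdm_coarseness)
rng = np.random.default_rng(0)
n, p = 200, 26
X = rng.standard_normal((n, p))
# Make the first two features informative for T3-T4 (1) vs T2 (0)
logit = 1.5 * X[:, 0] - 1.2 * X[:, 1]
y = (logit + 0.8 * rng.standard_normal(n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# L1 penalty drives uninformative coefficients to zero; the linear score is the Rad-score
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
)
model.fit(X_tr, y_tr)
rad_score = model.decision_function(X_te)  # sparse weighted sum of selected features
auc = roc_auc_score(y_te, rad_score)
```

mRMR pre-filtering (used in the study before LASSO) is not in scikit-learn; dedicated packages or a mutual-information ranking step would precede this fit.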

Machine learning diagnostic model for amyotrophic lateral sclerosis analysis using MRI-derived features.

Gil Chong P, Mazon M, Cerdá-Alberich L, Beser Robles M, Carot JM, Vázquez-Costa JF, Martí-Bonmatí L

PubMed · Aug 8, 2025
Amyotrophic lateral sclerosis (ALS) is a devastating motor neuron disease characterized by its diagnostic difficulty; no reliable biomarkers currently exist for the diagnostic process. In this scenario, our purpose is to apply machine learning algorithms to MRI-derived imaging variables to develop diagnostic models that facilitate and shorten that process. A dataset of 211 patients (114 ALS, 45 mimics, 22 genetic carriers, and 30 controls) with MRI-derived features of volumetry, cortical thickness, and local iron (via T2* mapping and visual assessment of susceptibility imaging) was analyzed. A binary classification approach was taken to distinguish patients with and without ALS. A sequential modeling methodology, understood from an iterative-improvement perspective, was followed, analyzing each group's performance separately to refine the models. Feature filtering, dimensionality reduction (PCA, kernel PCA), oversampling (SMOTE, ADASYN), and classification techniques (logistic regression, LASSO, Ridge, ElasticNet, support vector classifier, K-nearest neighbors, random forest) were included. Three subsets of the available data were used for each proposed architecture: one containing automatically retrieved MRI-derived data, one containing the variables from visual analysis of the susceptibility imaging, and one containing all features. The best results were attained with all available data using a voting classifier composed of five different classifiers: accuracy = 0.896, AUC = 0.929, sensitivity = 0.886, specificity = 0.929. These results confirm the potential of ML techniques applied to imaging variables of volumetry, cortical thickness, and local iron for developing a diagnostic model as a clinical decision-support tool.
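The winning model is a voting ensemble over heterogeneous classifiers. A minimal scikit-learn sketch combining four of the classifier families named in the abstract on synthetic data (the feature generator, estimator choices, and hyperparameters are illustrative assumptions, not the study's configuration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for MRI-derived volumetry / cortical-thickness / iron features
X, y = make_classification(n_samples=211, n_features=30, n_informative=8,
                           weights=[0.46, 0.54], random_state=0)

# Soft voting averages predicted probabilities across the member classifiers
voting = VotingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
        ("svc", make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
        ("rf", RandomForestClassifier(random_state=0)),
    ],
    voting="soft",
)
acc = cross_val_score(voting, X, y, cv=5, scoring="accuracy").mean()
```

SMOTE/ADASYN oversampling, as used in the study, would be inserted inside each cross-validation fold (e.g. via an imbalanced-learn pipeline) to avoid leaking synthetic samples into the held-out data.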

Development and validation of a transformer-based deep learning model for predicting distant metastasis in non-small cell lung cancer using ¹⁸FDG PET/CT images.

Hu N, Luo Y, Tang M, Yan G, Yuan S, Li F, Lei P

PubMed · Aug 8, 2025
This study aimed to develop and validate a hybrid deep learning (DL) model that integrates convolutional neural network (CNN) and vision transformer (ViT) architectures to predict distant metastasis (DM) in patients with non-small cell lung cancer (NSCLC) using ¹⁸F-FDG PET/CT images. A retrospective analysis was conducted on a cohort of consecutively registered patients newly diagnosed with, and untreated for, NSCLC. A total of 167 patients with available PET/CT images were included in the analysis. DL features were extracted using a combination of CNN and ViT architectures, followed by feature selection, model construction, and evaluation of model performance using receiver operating characteristic (ROC) analysis and the area under the curve (AUC). The ViT-based DL model exhibited strong predictive capability in both the training and validation cohorts, achieving AUCs of 0.824 and 0.830 for CT features and 0.602 and 0.694 for PET features, respectively. Notably, the model that integrated both PET and CT features achieved an AUC of 0.882 in the validation cohort, outperforming models that used either PET or CT features alone. Furthermore, this model outperformed the CNN model (ResNet-50), which achieved an AUC of 0.752 (95% CI 0.613-0.890), p < 0.05. Decision curve analysis further supported the efficacy of the ViT-based DL model. The ViT-based DL model developed in this study demonstrates considerable potential for predicting DM in patients with NSCLC, potentially informing the creation of personalized treatment strategies. Future validation through prospective studies with larger cohorts is necessary.
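The AUC values compared throughout this listing have a useful rank interpretation: AUC equals the Mann-Whitney probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A small NumPy sketch with toy scores (illustrative data, not the study's):

```python
import numpy as np

def auc_from_scores(y_true, scores) -> float:
    """AUC via the Mann-Whitney U statistic: P(score_pos > score_neg), ties count half."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()  # pairwise comparisons
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((greater + 0.5 * ties) / (pos.size * neg.size))

# Toy fused-model scores: 4 metastatic (1) and 4 non-metastatic (0) patients
y = [1, 1, 1, 1, 0, 0, 0, 0]
s = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
auc = auc_from_scores(y, s)  # 15 of 16 positive/negative pairs correctly ordered
```

This O(n²) pairwise form is fine for small cohorts like the 167 patients here; rank-based implementations scale better.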