
Preclinical Investigation of Artificial Intelligence-Assisted Implant Surgery Planning for Single Tooth Defects: A Case Series Study.

Ma H, Wu Y, Bai H, Xu Z, Ding P, Deng X, Tang Z

PubMed · Jun 12 2025
Dental implant surgery has become a prevalent treatment option for patients with single tooth defects, but its success relies heavily on precise planning and execution. This study investigates the application of artificial intelligence (AI) in planning dental implant surgery for single tooth defects. Such defects pose a significant challenge in restorative dentistry, and although implant restoration has emerged as an effective rehabilitation, the complexity of the procedure and the need for accurate treatment planning motivate the integration of advanced technologies. Here we propose the utilisation of AI to enhance the precision and efficiency of implant surgery planning for single tooth defects. A total of twenty patients with single tooth loss were enrolled. Cone-beam computed tomography (CBCT) and intra-oral scans were obtained and imported into the AI-dentist software for 3D reconstruction. AI assisted in implant selection, tooth position identification, and crown fabrication. Evaluation included subjective verification and objective assessments. A paired-samples t-test was used to compare planning times (dentist vs. AI), with a significance level of p < 0.05. Twenty patients (9 male, 11 female; mean age 59.5 ± 11.86 years) with single missing teeth participated. Implant margins were carefully positioned: 3.05 ± 1.44 mm from adjacent roots, 2.52 ± 0.65 mm from bone plate edges, 3.05 ± 1.44 mm from the sinus/canal, and 3.85 ± 1.23 mm from gingival height. Manual planning (21.50 ± 4.87 min) was significantly slower than AI planning (11.84 ± 3.22 min, p < 0.01). Implant plans met 100% of the buccolingual/proximal/distal bone volume criteria and 90% of the sinus/canal distance criteria. Two patients required sinus lifting and bone grafting due to insufficient bone volume.
This study highlights the promising role of AI in enhancing the precision and efficiency of dental implant surgery planning for single tooth defects. Further studies are necessary to validate the effectiveness and safety of AI-assisted planning in a larger patient population.
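The planning-time comparison above rests on a paired-samples t-test. As an editorial illustration with made-up planning times (not the study's data), the statistic can be computed directly from the per-case differences:

```python
import math
import statistics

def paired_t(a, b):
    """Paired-samples t statistic: mean of per-case differences divided by
    its standard error; degrees of freedom = n - 1."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical planning times in minutes (NOT the study's data)
manual = [22.0, 19.5, 25.0, 21.0, 23.5]
ai = [12.0, 10.5, 13.0, 11.5, 12.5]
t, df = paired_t(manual, ai)  # for these toy numbers, t is about 19.1
```

In practice the p-value is then read from a t distribution with n − 1 degrees of freedom; a t this large on 4 degrees of freedom is far beyond the p < 0.05 threshold.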

CT-based deep learning model for improved disease-free survival prediction in clinical stage I lung cancer: a real-world multicenter study.

Fu Y, Hou R, Qian L, Feng W, Zhang Q, Yu W, Cai X, Liu J, Wang Y, Ding Z, Xu Y, Zhao J, Fu X

PubMed · Jun 12 2025
To develop a deep learning (DL) model for predicting disease-free survival (DFS) in clinical stage I lung cancer patients who underwent surgical resection, using pre-treatment CT images, and to further validate it in patients receiving stereotactic body radiation therapy (SBRT). A retrospective cohort of 2489 clinical stage I non-small cell lung cancer (NSCLC) patients treated with surgery (2015-2017) was enrolled to develop a DL-based DFS prediction model. Tumor features were extracted from CT images using a three-dimensional convolutional neural network. External validation was performed on 248 clinical stage I patients receiving SBRT from two hospitals. A clinical model was constructed by multivariable Cox regression for comparison. Model performance was evaluated with Harrell's concordance index (C-index), which measures a model's ability to correctly rank survival times across all comparable pairs of subjects. In the surgical cohort, the DL model effectively predicted DFS with a C-index of 0.85 (95% CI: 0.80-0.89) in the internal testing set, significantly outperforming the clinical model (C-index: 0.76). Based on the DL model, 68 patients in the SBRT cohort identified as high-risk had significantly worse DFS than the low-risk group (p < 0.01; 5-year DFS rate: 34.7% vs 77.4%). The DL score was an independent predictor of DFS in both cohorts (p < 0.01). The CT-based DL model improved DFS prediction in clinical stage I lung cancer patients, identifying populations at high risk of recurrence and metastasis to guide clinical decision-making. Question: The recurrence or metastasis rate of early-stage lung cancer remains high and varies among patients following radical treatments such as surgery or SBRT. Findings: This CT-based DL model successfully predicted DFS and stratified disease risk in clinical stage I lung cancer patients undergoing surgery or SBRT.
Clinical relevance: The CT-based DL model is a reliable predictive tool for the prognosis of early-stage lung cancer. Its accurate risk stratification helps clinicians identify specific patients for personalized clinical decision-making.
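Harrell's C-index, as the abstract notes, scores how well predicted risks rank observed survival times over all comparable pairs. A minimal pure-Python sketch on toy data (not the study's cohort):

```python
def c_index(times, events, risk_scores):
    """Harrell's concordance index. A pair (i, j) is comparable when subject i
    has the shorter observed time AND an observed event; it is concordant when
    that shorter-surviving subject also has the higher predicted risk."""
    concordant = tied = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable

# Toy cohort: follow-up times (months), event flag (1 = recurrence/death), DL risk
times = [5, 8, 12, 20, 30]
events = [1, 1, 0, 1, 0]
risk = [0.9, 0.8, 0.6, 0.4, 0.2]  # risk decreases as survival lengthens
```

For these toy values every comparable pair is concordant, so the C-index is 1.0; reversing the risk scores drives it to 0.0, and 0.5 corresponds to random ranking.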

Machine Learning-Based Prediction of Delayed Neurological Sequelae in Carbon Monoxide Poisoning Using Automatically Extracted MR Imaging Features.

Lee GY, Sohn CH, Kim D, Jeon SB, Yun J, Ham S, Nam Y, Yum J, Kim WY, Kim N

PubMed · Jun 12 2025
Delayed neurological sequelae are among the most serious complications of carbon monoxide poisoning. However, no reliable tools are available for evaluating their risk. We aimed to assess whether machine learning models using imaging features automatically extracted from brain MRI can predict the risk of delayed neurological sequelae in patients with acute carbon monoxide poisoning. This single-center, retrospective, observational study analyzed a prospectively collected registry of acute carbon monoxide poisoning patients who visited our emergency department from April 2011 to December 2015. Overall, 1618 radiomics and 4 lesion-segmentation features from DWI b1000 and ADC images, as well as 62 clinical variables, were extracted from each patient. The entire dataset was divided into five subsets, with one serving as the hold-out test set and the remaining four used for training and tuning. Four machine learning models, logistic regression, support vector machine, random forest, and extreme gradient boosting, as well as an ensemble model, were trained and evaluated using 20 different data configurations. The primary evaluation metric was the mean and 95% CI of the area under the receiver operating characteristic curve. Shapley additive explanations were calculated and visualized to enhance model interpretability. Of the 373 patients, delayed neurological sequelae occurred in 99 (26.5%) patients (mean age 43.0 ± 15.2 years; 62.0% male). The means [95% CIs] of the area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity of the best performing machine learning model for predicting the development of delayed neurological sequelae were 0.88 [0.86-0.9], 0.82 [0.8-0.83], 0.81 [0.79-0.83], and 0.82 [0.8-0.84], respectively.
Among imaging features, the presence, size, and number of acute brain lesions on DWI b1000 and ADC images predicted DNS risk more accurately than advanced radiomics features based on shape, texture, and wavelet transformation. Machine learning models developed using automatically extracted brain MRI features together with clinical features can identify patients at risk of delayed neurological sequelae. The models enable effective prediction of delayed neurological sequelae in patients with acute carbon monoxide poisoning, facilitating timely treatment planning for prevention. ABL = acute brain lesion; AUROC = area under the receiver operating characteristic curve; CO = carbon monoxide; DNS = delayed neurological sequelae; LR = logistic regression; ML = machine learning; RF = random forest; SVM = support vector machine; XGBoost = extreme gradient boosting.
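The AUROC reported above has an equivalent pairwise reading: the probability that a randomly chosen patient who developed DNS receives a higher model score than a randomly chosen patient who did not. A small illustrative sketch with invented scores:

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney formulation: probability that a random
    positive case outscores a random negative case (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented example: 1 = developed delayed neurological sequelae
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.5, 0.3, 0.2, 0.1]
```

Here 11 of the 12 positive-negative pairs are ranked correctly, giving an AUROC of 11/12 ≈ 0.92, in the same range as the study's best model (0.88).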

Accelerated MRI in temporomandibular joints using AI-assisted compressed sensing technique: a feasibility study.

Ye Z, Lyu X, Zhao R, Fan P, Yang S, Xia C, Li Z, Xiong X

PubMed · Jun 12 2025
To investigate the feasibility of accelerated MRI with the artificial intelligence-assisted compressed sensing (ACS) technique in the temporomandibular joint (TMJ) and compare its performance with a parallel imaging (PI) protocol and a standard (STD) protocol. Participants with TMJ-related symptoms were prospectively enrolled from April 2023 to May 2024 and underwent bilateral TMJ imaging examinations using the ACS protocol (6:08 min), PI protocol (10:57 min), and STD protocol (13:28 min). Overall image quality and visibility of relevant TMJ structures were qualitatively evaluated on a 4-point Likert scale. Quantitative analysis of signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the TMJ disc, condyle, and lateral pterygoid muscle (LPM) was performed. Diagnostic agreement on joint effusion and disc displacement among protocols and investigators was assessed by Fleiss' kappa analysis. A total of 51 participants (16 male, 35 female) with 102 TMJs were included. The overall image quality and most structure scores of the ACS protocol were significantly higher than those of the STD protocol (all p < 0.05), and similar to the PI protocol. For quantitative analysis, the ACS protocol demonstrated significantly higher SNR and CNR than the STD protocol in the TMJ disc, condyle, and LPM (all p < 0.05), and showed comparable SNR to the PI protocol in most sequences. Good to excellent inter-protocol and inter-observer agreement was observed for diagnosing TMJ abnormalities (κ = 0.699-1.000). Accelerated MRI with the ACS technique can significantly reduce TMJ acquisition time while providing superior or equivalent image quality and strong diagnostic agreement with the PI and STD protocols. Question: Patients with TMJ disorders often cannot endure long MRI examinations due to orofacial pain, necessitating accelerated MRI to improve patient comfort.
Findings: The ACS technique can significantly reduce acquisition time in TMJ imaging while providing superior or equivalent image quality. Clinical relevance: The time-saving ACS technique improves image quality and achieves excellent diagnostic agreement in the evaluation of joint effusion and disc displacement. It helps optimize clinical MRI workflow in patients with TMJ disorders.
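For reference, SNR and CNR are commonly defined against background-noise variability; the abstract does not give its exact formulas, so the definitions below are assumptions, shown on hypothetical ROI intensities:

```python
import statistics

def snr(signal_roi, noise_roi):
    """One common definition: mean ROI signal over SD of background noise."""
    return statistics.mean(signal_roi) / statistics.stdev(noise_roi)

def cnr(roi_a, roi_b, noise_roi):
    """Contrast between two tissues relative to background-noise SD."""
    return abs(statistics.mean(roi_a) - statistics.mean(roi_b)) / statistics.stdev(noise_roi)

# Hypothetical pixel intensities (arbitrary units), NOT measured values
disc = [120, 118, 122, 121, 119]   # TMJ disc ROI
muscle = [80, 82, 78, 81, 79]      # lateral pterygoid muscle ROI
noise = [2, -1, 1, 0, -2]          # background air ROI
```

A faster protocol that preserves these ratios relative to the standard protocol is what the quantitative comparison above is checking.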

OneTouch Automated Photoacoustic and Ultrasound Imaging of Breast in Standing Pose.

Zhang H, Zheng E, Zheng W, Huang C, Xi Y, Cheng Y, Yu S, Chakraborty S, Bonaccio E, Takabe K, Fan XC, Xu W, Xia J

PubMed · Jun 12 2025
We developed an automated photoacoustic and ultrasound breast tomography system that images the patient in the standing pose. The system, named OneTouch-PAT, utilizes linear transducer arrays with optical-acoustic combiners for effective dual-modal imaging. During scanning, subjects only need to gently attach their breasts to the imaging window, and co-registered three-dimensional ultrasonic and photoacoustic images of the breast can be obtained within one minute. The system has a large field of view of 17 cm by 15 cm and achieves an imaging depth of 3 cm with sub-millimeter resolution. A three-dimensional deep-learning network was also developed to further improve image quality by sharpening 3D resolution, enhancing vasculature, suppressing skin signals, and reducing noise. The performance of the system was tested on four healthy subjects and 61 patients with breast cancer. Our results indicate that the ultrasound structural information can be combined with the photoacoustic vascular information for better tissue characterization. Representative cases from different molecular subtypes exhibited distinct photoacoustic and ultrasound features that could potentially be used for imaging-based cancer classification. Statistical analysis across all patients indicates that regional photoacoustic intensity and vessel branching points are indicators of breast malignancy. These promising results suggest that our system could significantly enhance breast cancer diagnosis and classification.
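Vessel branching points, one of the malignancy indicators mentioned above, can be counted on a binary vessel skeleton as pixels with three or more 8-connected skeleton neighbours. A toy sketch (the system's actual vessel analysis pipeline is not described in this abstract):

```python
def branch_points(mask):
    """Count bifurcations in a binary vessel skeleton: skeleton pixels with
    three or more 8-connected skeleton neighbours."""
    h, w = len(mask), len(mask[0])
    count = 0
    for r in range(h):
        for c in range(w):
            if not mask[r][c]:
                continue
            nbrs = sum(mask[rr][cc]
                       for rr in range(max(r - 1, 0), min(r + 2, h))
                       for cc in range(max(c - 1, 0), min(c + 2, w))
                       if (rr, cc) != (r, c))
            if nbrs >= 3:
                count += 1
    return count

# A small Y-shaped skeleton: exactly one bifurcation expected
y_skeleton = [
    [1, 0, 0, 0, 1],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]
```

In a real pipeline the skeleton would first be extracted from the enhanced photoacoustic vasculature (e.g., by thresholding and thinning) before counting.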

Simulation-free workflow for lattice radiation therapy using deep learning predicted synthetic computed tomography: A feasibility study.

Zhu L, Yu NY, Ahmed SK, Ashman JB, Toesca DS, Grams MP, Deufel CL, Duan J, Chen Q, Rong Y

PubMed · Jun 12 2025
Lattice radiation therapy (LRT) is a form of spatially fractionated radiation therapy that allows increased total dose delivery, aiming for improved treatment response without an increase in toxicities, and is commonly utilized for palliation of bulky tumors. The LRT treatment planning process is complex, while eligible patients often have an urgent need for an expedited treatment start. In this study, we aimed to develop a simulation-free workflow for volumetric modulated arc therapy (VMAT)-based LRT planning via deep learning-predicted synthetic CT (sCT) to expedite treatment initiation. Two deep learning models were trained using a 3D U-Net architecture to generate sCT from diagnostic CTs (dCT) of the thoracic and abdominal regions using a training dataset of 50 patients. The models were then tested on an independent dataset of 15 patients using image similarity analysis, with mean absolute error (MAE) and structural similarity index measure (SSIM) as metrics. VMAT-based LRT plans were generated on the sCT and recalculated on the planning CT (pCT) for dosimetric accuracy comparison. Differences in dose volume histogram (DVH) metrics between pCT and sCT plans were assessed using the Wilcoxon signed-rank test. The final sCT prediction model demonstrated high image similarity to pCT, with an MAE and SSIM of 38.93 ± 14.79 Hounsfield units (HU) and 0.92 ± 0.05 for the thoracic region, and 73.60 ± 22.90 HU and 0.90 ± 0.03 for the abdominal region, respectively. There were no statistically significant differences between sCT and pCT plans in organ-at-risk or target volume DVH parameters, including maximum dose (Dmax), mean dose (Dmean), and dose delivered to 90% (D90%) and 50% (D50%) of the target volume, except for minimum dose (Dmin) and D10%.
With demonstrated high image similarity and adequate dose agreement between sCT and pCT, our study is a proof-of-concept for using deep learning predicted sCT for a simulation-free treatment planning workflow for VMAT-based LRT.
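The MAE and SSIM metrics used for image similarity can be illustrated on toy intensity lists; note this computes a single global SSIM window, whereas standard SSIM averages many local sliding windows:

```python
def mae(a, b):
    """Mean absolute error between two images, given as flat HU lists."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def global_ssim(a, b, data_range=2000.0, k1=0.01, k2=0.03):
    """Single-window (global) SSIM; constants follow the usual convention
    c1 = (k1*L)^2, c2 = (k2*L)^2 for dynamic range L."""
    n = len(a)
    mu_a, mu_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mu_a) ** 2 for x in a) / (n - 1)
    var_b = sum((y - mu_b) ** 2 for y in b) / (n - 1)
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / (n - 1)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a * mu_a + mu_b * mu_b + c1) * (var_a + var_b + c2))

# Toy 4-voxel "images" in Hounsfield units (illustrative only)
pct = [0.0, 100.0, 200.0, 300.0]   # planning CT
sct = [10.0, 90.0, 210.0, 290.0]   # synthetic CT
```

Identical images give SSIM = 1 and MAE = 0; the study's thoracic result (MAE ≈ 39 HU, SSIM ≈ 0.92) indicates close but not perfect agreement.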

Med-URWKV: Pure RWKV With ImageNet Pre-training For Medical Image Segmentation

Zhenhuan Zhou

arXiv preprint · Jun 12 2025
Medical image segmentation is a fundamental and key technology in computer-aided diagnosis and treatment. Previous methods can be broadly classified into three categories: convolutional neural network (CNN)-based, Transformer-based, and hybrid architectures that combine both. However, each has its own limitations, such as restricted receptive fields in CNNs or the computational overhead caused by the quadratic complexity of Transformers. Recently, the Receptance Weighted Key Value (RWKV) model has emerged as a promising alternative for various vision tasks, offering strong long-range modeling capabilities with linear computational complexity. Some studies have also adapted RWKV to medical image segmentation tasks, achieving competitive performance. However, most of these studies focus on modifications to the Vision-RWKV (VRWKV) mechanism and train models from scratch, without exploring the potential advantages of leveraging pre-trained VRWKV models for medical image segmentation. In this paper, we propose Med-URWKV, a pure RWKV-based architecture built upon the U-Net framework, which incorporates ImageNet-based pretraining to further explore the potential of RWKV in medical image segmentation. To the best of our knowledge, Med-URWKV is the first pure RWKV segmentation model in the medical field that can directly reuse a large-scale pre-trained VRWKV encoder. Experimental results on seven datasets demonstrate that Med-URWKV achieves comparable or even superior segmentation performance compared to other carefully optimized RWKV models trained from scratch. This validates the effectiveness of using a pretrained VRWKV encoder to enhance model performance. The code will be released.

AI-based identification of patients who benefit from revascularization: a multicenter study

Zhang, W., Miller, R. J., Patel, K., Shanbhag, A., Liang, J., Lemley, M., Ramirez, G., Builoff, V., Yi, J., Zhou, J., Kavanagh, P., Acampa, W., Bateman, T. M., Di Carli, M. F., Dorbala, S., Einstein, A. J., Fish, M. B., Hauser, M. T., Ruddy, T., Kaufmann, P. A., Miller, E. J., Sharir, T., Martins, M., Halcox, J., Chareonthaitawee, P., Dey, D., Berman, D., Slomka, P.

medRxiv preprint · Jun 12 2025
Background and Aims: Revascularization in stable coronary artery disease often relies on ischemia severity, but we introduce an AI-driven approach that uses clinical and imaging data to estimate individualized treatment effects and guide personalized decisions.
Methods: Using a large, international registry from 13 centers, we developed an AI model to estimate individual treatment effects by simulating outcomes under alternative therapeutic strategies. The model was trained on an internal cohort constructed using 1:1 propensity score matching to emulate randomized controlled trials (RCTs), creating balanced patient pairs in which only the treatment strategy differed: early revascularization (any procedure within 90 days of myocardial perfusion imaging, MPI) versus medical therapy. This design allowed the model to estimate individualized treatment effects, forming the basis for counterfactual reasoning at the patient level. We then derived the AI-REVASC score, which quantifies, for each patient, the potential benefit of early revascularization. The score was validated in the held-out testing cohort using Cox regression.
Results: Of 45,252 patients, 19,935 (44.1%) were female; median age was 65 (IQR: 57-73). During a median follow-up of 3.6 years (IQR: 2.7-4.9), 4,323 (9.6%) experienced MI or death. The AI model identified a group (n=1,335, 5.9%) that benefits from early revascularization, with a propensity-adjusted hazard ratio of 0.50 (95% CI: 0.25-1.00). Patients identified for early revascularization had a higher prevalence of hypertension, diabetes, and dyslipidemia, and lower LVEF.
Conclusions: This study pioneers a scalable, data-driven approach that emulates randomized trials using retrospective data. The AI-REVASC score enables precision revascularization decisions where guidelines and RCTs fall short.
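The 1:1 propensity score matching used to build the training cohort is commonly implemented as greedy nearest-neighbour matching with a caliper; below is a simplified sketch with invented scores (the study's exact matching procedure may differ):

```python
def greedy_match(treated, control, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.
    treated/control map patient id -> propensity score. Each control is
    used at most once; a pair is kept only if the score gap fits the caliper."""
    available = dict(control)
    pairs = []
    # match the highest-score (hardest-to-match) treated patients first
    for t_id, t_ps in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_ps))
        if abs(available[c_id] - t_ps) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]
    return pairs

# Invented propensity scores for early revascularization
treated = {"T1": 0.81, "T2": 0.42, "T3": 0.55}
control = {"C1": 0.80, "C2": 0.44, "C3": 0.10, "C4": 0.57}
pairs = greedy_match(treated, control)
```

Each retained pair differs (ideally) only in treatment strategy, which is what lets outcome differences within pairs approximate individual treatment effects.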
Graphical Abstract: Figure 1 (image omitted).

MedSeg-R: Reasoning Segmentation in Medical Images with Multimodal Large Language Models

Yu Huang, Zelin Peng, Yichen Zhao, Piao Yang, Xiaokang Yang, Wei Shen

arXiv preprint · Jun 12 2025
Medical image segmentation is crucial for clinical diagnosis, yet existing models are limited by their reliance on explicit human instructions and lack the active reasoning capabilities needed to understand complex clinical questions. While recent advancements in multimodal large language models (MLLMs) have improved medical question-answering (QA) tasks, most methods struggle to generate precise segmentation masks, limiting their application in automatic medical diagnosis. In this paper, we introduce medical image reasoning segmentation, a novel task that aims to generate segmentation masks based on complex and implicit medical instructions. To address this, we propose MedSeg-R, an end-to-end framework that leverages the reasoning abilities of MLLMs to interpret clinical questions while also producing precise segmentation masks for medical images. It is built on two core components: 1) a global context understanding module that interprets images and comprehends complex medical instructions to generate multimodal intermediate tokens, and 2) a pixel-level grounding module that decodes these tokens to produce precise segmentation masks and textual responses. Furthermore, we introduce MedSeg-QA, a large-scale dataset tailored for the medical image reasoning segmentation task. It includes over 10,000 image-mask pairs and multi-turn conversations, automatically annotated using large language models and refined through physician reviews. Experiments show MedSeg-R's superior performance across several benchmarks, achieving high segmentation accuracy and enabling interpretable textual analysis of medical images.

DUN-SRE: Deep Unrolling Network with Spatiotemporal Rotation Equivariance for Dynamic MRI Reconstruction

Yuliang Zhu, Jing Cheng, Qi Xie, Zhuo-Xu Cui, Qingyong Zhu, Yuanyuan Liu, Xin Liu, Jianfeng Ren, Chengbo Wang, Dong Liang

arXiv preprint · Jun 12 2025
Dynamic Magnetic Resonance Imaging (MRI) exhibits transformation symmetries, including spatial rotation symmetry within individual frames and temporal symmetry along the time dimension. Explicit incorporation of these symmetry priors in the reconstruction model can significantly improve image quality, especially under aggressive undersampling. Recently, equivariant convolutional neural networks (ECNNs) have shown great promise in exploiting spatial symmetry priors. However, existing ECNNs critically fail to model temporal symmetry, arguably the most universal and informative structural prior in dynamic MRI reconstruction. To tackle this issue, we propose a novel Deep Unrolling Network with Spatiotemporal Rotation Equivariance (DUN-SRE) for dynamic MRI reconstruction. DUN-SRE establishes spatiotemporal equivariance through a (2+1)D equivariant convolutional architecture. In particular, it integrates both the data consistency and proximal mapping modules into a unified deep unrolling framework. This architecture ensures rigorous propagation of spatiotemporal rotation symmetry constraints throughout the reconstruction process, enabling more physically accurate modeling of cardiac motion dynamics in cine MRI. In addition, a high-fidelity group filter parameterization mechanism is developed to maintain representation precision while enforcing symmetry constraints. Comprehensive experiments on cardiac cine MRI datasets demonstrate that DUN-SRE achieves state-of-the-art performance, particularly in preserving rotation-symmetric structures, and offers strong generalization to a broad range of dynamic MRI reconstruction tasks.
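The rotation-equivariance property at the heart of such architectures means filtering commutes with rotation: f(rot(x)) = rot(f(x)). A minimal check for 90° rotations, using an isotropic (uniform) 3×3 mean filter, which is trivially invariant under quarter-turns:

```python
def rot90(img):
    """Rotate a 2D list-of-lists by 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*img)][::-1]

def mean_filter(img):
    """Valid (no padding) 3x3 mean filter; the uniform kernel is unchanged
    by quarter-turn rotation, which makes the filter equivariant."""
    h, w = len(img), len(img[0])
    return [[sum(img[r + dr][c + dc] for dr in range(3) for dc in range(3)) / 9.0
             for c in range(w - 2)]
            for r in range(h - 2)]

# Equivariance check on an arbitrary 6x6 image:
# rotating then filtering should equal filtering then rotating.
img = [[float((r * 7 + c * 3) % 11) for c in range(6)] for r in range(6)]
lhs = mean_filter(rot90(img))
rhs = rot90(mean_filter(img))
```

Learned equivariant layers generalize this idea: the filter bank is constrained (via group parameterization) so that the commuting property holds for the whole rotation group, not just for a hand-picked symmetric kernel.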
