Page 17 of 147 (1465 results)

First experiences with an adaptive pelvic radiotherapy system: Analysis of treatment times and learning curve.

Benzaquen D, Taussky D, Fave V, Bouveret J, Lamine F, Letenneur G, Halley A, Solmaz Y, Champion A

PubMed | June 16, 2025
The Varian Ethos system allows not only on-treatment-table plan adaptation but also automated contouring with the aid of artificial intelligence. This study evaluates the initial clinical implementation of an adaptive pelvic radiotherapy system, focusing on treatment times and the associated learning curve. We analyzed data from 903 consecutive treatments, mostly for urogenital cancers, at our center. Treatment time was measured from the first cone-beam computed tomography (CBCT) scan used for replanning until the end of treatment. To assess whether treatments became shorter over time, we grouped first-treatment dates into 3-month quartiles. Differences between groups were tested with t-tests. The mean time from the first CBCT scan to the end of treatment was 25.9 min (standard deviation: 6.9 min). Treatment time depended on the number of planning target volumes (PTVs) and on whether the pelvic lymph nodes were treated: the mean time from CBCT to the end of treatment was 37% longer if the pelvic lymph nodes were treated and 26% longer if there were more than two PTVs. There was a learning curve: in linear regression analysis, both the quartile of the treatment period (odds ratio [OR]: 1.3, 95% confidence interval [CI]: 0.70-1.8, P<0.001) and the number of PTVs (OR: 3.0, 95% CI: 2.6-3.4, P<0.001) were predictive of treatment time. Approximately two-thirds of treatments were delivered within 33 min. Treatment time was strongly dependent on the number of separate PTVs, and there was a continuous learning curve.
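
The abstract's regression of treatment time on experience and plan complexity can be sketched as below. This is an illustration on simulated data, not the study's code; the generative model (effect sizes, noise level) is entirely hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative sketch only (simulated data, not the study's): regress
# treatment time on the 3-month quartile of the treatment period and on
# the number of planning target volumes (PTVs), as in the abstract.
rng = np.random.default_rng(0)
n = 903
quartile = rng.integers(1, 5, size=n)   # 1..4; later quartile = more experience
n_ptv = rng.integers(1, 4, size=n)      # number of PTVs per treatment
# Hypothetical generative model: times shrink with experience, grow with PTVs.
time_min = 30.0 - 1.5 * quartile + 4.0 * n_ptv + rng.normal(0, 5, size=n)

X = np.column_stack([quartile, n_ptv])
model = LinearRegression().fit(X, time_min)
print(model.coef_)   # negative effect of experience, positive effect of PTVs
```

With 903 observations, the fitted coefficients recover the assumed signs: experience shortens treatments while additional PTVs lengthen them.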

Kernelized weighted local information based picture fuzzy clustering with multivariate coefficient of variation and modified total Bregman divergence measure for brain MRI image segmentation.

Lohit H, Kumar D

PubMed | June 16, 2025
This paper proposes a novel clustering method for noisy image segmentation using a kernelized weighted local information approach under the Picture Fuzzy Set (PFS) framework. Existing kernel-based fuzzy clustering methods struggle with noisy environments and non-linear structures, while intuitionistic fuzzy clustering methods face limitations in handling uncertainty in real-world medical images. To address these challenges, we introduce a local picture fuzzy information measure, developed for the first time using Multivariate Coefficient of Variation (MCV) theory, enhancing robustness in segmentation. Additionally, we integrate non-Euclidean distance measures: kernel distance for local information computation and a modified total Bregman divergence (MTBD) measure for improving clustering accuracy. This combination enhances both local spatial consistency and global membership estimation, leading to precise segmentation. The proposed method is extensively evaluated on synthetic images with Gaussian, salt-and-pepper, and mixed noise; on the BrainWeb, IBSR, and MRBrainS18 MRI datasets under varying Rician noise levels; and on a CT image template. Furthermore, we benchmark the proposed method against two deep learning-based segmentation models, ResNet34-LinkNet and patch-based U-Net. Experimental results demonstrate significant improvements in segmentation accuracy, as validated by metrics such as the Dice score, Fuzzy Performance Index, Modified Partition Entropy, Average Volume Difference (AVD), and the XB index. Additionally, Friedman's statistical test confirms the superior performance of our approach compared to state-of-the-art clustering methods for noisy image segmentation. To facilitate reproducibility, the implementation of our proposed method is publicly available at: Google Drive Repository.
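
A minimal sketch of the kernel-based fuzzy clustering family this work builds on: plain Gaussian-kernel fuzzy c-means on 1-D intensities. This is an assumed simplification that omits the paper's picture-fuzzy sets, local spatial weighting, and modified total Bregman divergence.

```python
import numpy as np

# Minimal kernelized fuzzy c-means (KFCM) sketch: the kernel-induced
# distance replaces the Euclidean one, and centers are updated with
# kernel-weighted fuzzy memberships. Simplified relative to the paper.
def kfcm(x, c=2, m=2.0, sigma=1.0, iters=50):
    v = np.quantile(x, np.linspace(0.1, 0.9, c))   # spread-out initial centers
    for _ in range(iters):
        # kernel-induced squared distance: 2 * (1 - K(x, v))
        k = np.exp(-((x[:, None] - v[None, :]) ** 2) / (2 * sigma ** 2))
        d2 = np.maximum(2.0 * (1.0 - k), 1e-12)
        u = d2 ** (-1.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships
        w = (u ** m) * k                           # kernel-weighted center update
        v = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
    return v, u

# Two tight 1-D intensity clusters as a toy stand-in for MRI voxel values.
x = np.concatenate([np.full(50, 0.2), np.full(50, 0.8)]) \
    + np.random.default_rng(1).normal(0, 0.02, 100)
centers, memberships = kfcm(x)
print(np.sort(centers))   # approximately [0.2, 0.8]
```

The kernel distance makes the update robust to outliers because distant points receive near-zero kernel weight; the paper replaces this generic measure with its MTBD and local-information variants.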

Roadmap analysis for coronary artery stenosis detection and percutaneous coronary intervention prediction in cardiac CT for transcatheter aortic valve replacement.

Fujito H, Jilaihawi H, Han D, Gransar H, Hashimoto H, Cho SW, Lee S, Gheyath B, Park RH, Patel D, Guo Y, Kwan AC, Hayes SW, Thomson LEJ, Slomka PJ, Dey D, Makkar R, Friedman JD, Berman DS

PubMed | June 16, 2025
The new artificial intelligence-based software Roadmap (HeartFlow) may assist in evaluating coronary artery stenosis during cardiac computed tomography (CT) for transcatheter aortic valve replacement (TAVR). Consecutive TAVR candidates who underwent both cardiac CT angiography (CTA) and invasive coronary angiography were enrolled. We evaluated the ability of three approaches to predict obstructive coronary artery disease (CAD), defined as ≥50% stenosis on quantitative coronary angiography (QCA), and the need for percutaneous coronary intervention (PCI) within one year: Roadmap alone, CT specialists with Roadmap, and CT specialists alone. The area under the curve (AUC) for predicting QCA ≥50% stenosis was similar for CT specialists with or without Roadmap (0.93 [0.85-0.97] vs. 0.94 [0.88-0.98], p = 0.82), and both were significantly higher than for Roadmap alone (all p < 0.05). For PCI prediction, no significant differences were found between QCA and CT specialists with or without Roadmap, while Roadmap's AUC was lower (all p < 0.05). The negative predictive value (NPV) of CT specialists with Roadmap for ≥50% stenosis was 97%, and for PCI prediction the NPV was comparable to QCA (p = 1.00). In contrast, the positive predictive value (PPV) of Roadmap alone for ≥50% stenosis was 49%, the lowest among all approaches, with a similar trend for PCI prediction. While Roadmap alone is insufficient for clinical decision-making owing to its low PPV, it may serve as a "second observer": a supportive tool for CT specialists that flags lesions for careful review, thereby enhancing workflow efficiency while maintaining high diagnostic accuracy with excellent NPV.
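
The study's three headline metrics (AUC, NPV, PPV) can be computed from a binary ≥50%-stenosis call as follows. The labels and scores below are invented for illustration; they are not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hedged sketch of the reader-study metrics: AUC from continuous scores,
# NPV/PPV from the thresholded binary call. All values here are made up.
y_true  = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1, 0])        # QCA >=50% stenosis?
y_score = np.array([.1, .2, .3, .4, .9, .8, .7, .6, .85, .15])
y_pred  = (y_score >= 0.5).astype(int)

auc = roc_auc_score(y_true, y_score)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
npv = tn / (tn + fn)   # how reliably a negative read rules out disease
ppv = tp / (tp + fp)   # how reliably a positive read confirms disease
print(auc, npv, ppv)
```

Note how a reader can have an excellent NPV (every negative call is correct here) while the PPV suffers from false-positive flags, which is exactly the pattern reported for Roadmap alone.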

Precision Medicine and Machine Learning to predict critical disease and death due to Coronavirus disease 2019 (COVID-19).

Júnior WLDT, Danelli T, Tano ZN, Cassela PLCS, Trigo GL, Cardoso KM, Loni LP, Ahrens TM, Espinosa BR, Fernandes AJ, Almeida ERD, Lozovoy MAB, Reiche EMV, Maes M, Simão ANC

PubMed | June 16, 2025
The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) causes Coronavirus Disease 2019 (COVID-19) and induces activation of inflammatory pathways, including the inflammasome. The aim was to construct Machine Learning (ML) models to predict critical disease and death in patients with COVID-19. A total of 528 individuals with SARS-CoV-2 infection were included, comprising 308 with critical and 220 with non-critical COVID-19. The ML models included imaging, demographic, and inflammatory biomarkers, as well as NLRP3 (rs10754558 and rs10157379) and IL18 (rs360717 and rs187238) inflammasome variants. Individuals with critical COVID-19 were older and had a higher male/female ratio, body mass index (BMI), rates of type 2 diabetes mellitus (T2DM) and hypertension, inflammatory biomarker levels, need for orotracheal intubation, intensive care unit admission, incidence of death, and sickness symptom complex (SSC) scores, as well as lower peripheral oxygen saturation (SpO<sub>2</sub>), compared with those with non-critical disease. We found that 49.5% of the variance in the severity of critical COVID-19 was explained by SpO<sub>2</sub> and SSC (negatively associated) and by chest computed tomography alterations (CCTA), inflammatory biomarkers, severe acute respiratory syndrome (SARS), BMI, T2DM, and age (positively associated). In this model, the NLRP3/IL18 variants showed indirect effects on critical COVID-19 that were mediated by inflammatory biomarkers, SARS, and SSC. Neural network models predicted critical disease and death due to COVID-19 with areas under the receiver operating characteristic curve of 0.930 and 0.927, respectively. These ML methods increase the accuracy of predicting severity, critical illness, and mortality caused by COVID-19 and show that the genetic variants contribute to the predictive power of the models.
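
A toy version of the neural-network severity model can be sketched as follows. The five columns are synthetic stand-ins for SpO2, the SSC score, a CT-alteration flag, one inflammatory biomarker, and age; the generative rule and all numbers are simulated, not the study's.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Simulated stand-in for the abstract's neural-network severity model.
rng = np.random.default_rng(42)
n = 528                                  # matches the cohort size
X = rng.normal(size=(n, 5))
# Hypothetical rule: low SpO2 (col 0) and a high biomarker (col 3) drive risk.
logit = -1.2 * X[:, 0] + 0.9 * X[:, 3] + rng.normal(0, 0.5, n)
y = (logit > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(Xtr, ytr)
auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
print(round(auc, 3))
```

Evaluating on a held-out split, as here, is what makes a reported AUC (the study's 0.930/0.927) meaningful rather than an artifact of overfitting.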

TCFNet: Bidirectional face-bone transformation via a Transformer-based coarse-to-fine point movement network.

Zhang R, Jie B, He Y, Wang J

PubMed | June 16, 2025
Computer-aided surgical simulation is a critical component of orthognathic surgical planning, in which accurately simulating face-bone shape transformations is essential. Traditional biomechanical simulation methods are limited by long computation times, labor-intensive data processing, and low accuracy. Recently, deep learning-based simulation methods have been proposed that view this problem as a point-to-point transformation between skeletal and facial point clouds. However, these approaches cannot process large-scale point sets, have limited receptive fields that lead to noisy points, and require complex registration-based preprocessing and postprocessing operations; these shortcomings limit their performance and widespread applicability. We therefore propose a Transformer-based coarse-to-fine point movement network (TCFNet) that learns unique, complicated correspondences at the patch and point levels for dense face-bone point cloud transformations. This end-to-end framework adopts a Transformer-based network in the first stage and a local information aggregation network (LIA-Net) in the second, the two reinforcing each other to generate precise point movement paths. LIA-Net compensates for the neighborhood precision loss of the Transformer-based network by modeling local geometric structures (edges, orientations, and relative position features), and global features from the first stage guide the local displacement through a gated recurrent unit. Inspired by deformable medical image registration, we also propose an auxiliary loss that can utilize expert knowledge for reconstructing critical organs; our framework is unsupervised, and this loss is optional. Compared with existing state-of-the-art (SOTA) methods on the gathered datasets, TCFNet achieves outstanding evaluation metrics and visualization results. The code is available at https://github.com/Runshi-Zhang/TCFNet.
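
The "global features guide local displacement through a gated recurrent unit" idea can be illustrated with a single GRU cell whose hidden state plays the role of a point's displacement estimate. This is a toy sketch of the gating mechanism only: the weights are random and the dimensions invented, whereas the real LIA-Net is a learned network over point neighborhoods.

```python
import numpy as np

# One GRU cell, written out explicitly: the update/reset gates decide how
# much of the previous displacement estimate to keep versus overwrite
# based on the global feature vector. Weights here are random placeholders.
def gru_cell(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    z = 1 / (1 + np.exp(-(Wz @ x + Uz @ h)))   # update gate
    r = 1 / (1 + np.exp(-(Wr @ x + Ur @ h)))   # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))   # candidate displacement
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_global, d_disp = 8, 3                         # invented dimensions
W = [rng.normal(scale=0.1, size=s) for s in
     [(d_disp, d_global), (d_disp, d_disp)] * 3]
h = np.zeros(d_disp)                            # initial displacement estimate
g = rng.normal(size=d_global)                   # global feature vector
for _ in range(4):                              # coarse-to-fine refinements
    h = gru_cell(g, h, *W)
print(h.shape)
```

The gating lets the network refine a displacement over several steps without discarding earlier coarse estimates, mirroring the coarse-to-fine design described in the abstract.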

Three-dimensional multimodal imaging for predicting early recurrence of hepatocellular carcinoma after surgical resection.

Peng J, Wang J, Zhu H, Jiang P, Xia J, Cui H, Hong C, Zeng L, Li R, Li Y, Liang S, Deng Q, Deng H, Xu H, Dong H, Xiao L, Liu L

PubMed | June 16, 2025
High tumor recurrence after surgery remains a significant challenge in managing hepatocellular carcinoma (HCC). We aimed to construct a multimodal model to forecast early recurrence of HCC after surgical resection and to explore the associated biological mechanisms. Overall, 519 patients with HCC from three medical centers were included: 433 patients from Nanfang Hospital served as the training cohort, and 86 patients from the other two hospitals comprised the validation cohort. Radiomics and deep learning (DL) models were developed using contrast-enhanced computed tomography images, with radiomics feature visualization and gradient-weighted class activation mapping applied to improve interpretability. A multimodal model (MM-RDLM) was constructed by integrating the radiomics and DL models, and associations between MM-RDLM and recurrence-free survival (RFS) and overall survival were analyzed. Gene set enrichment analysis (GSEA) and multiplex immunohistochemistry (mIHC) were used to investigate the biological mechanisms. Models based on hepatic arterial phase images exhibited the best predictive performance, with the radiomics and DL models achieving areas under the curve (AUCs) of 0.770 (95% confidence interval [CI]: 0.725-0.815) and 0.846 (95% CI: 0.807-0.886), respectively, in the training cohort. MM-RDLM achieved an AUC of 0.955 (95% CI: 0.937-0.972) in the training cohort and 0.930 (95% CI: 0.876-0.984) in the validation cohort. MM-RDLM (high vs. low) was notably linked to RFS in both the training (hazard ratio [HR] = 7.80 [5.74-10.61], P < 0.001) and validation (HR = 10.46 [4.96-22.68], P < 0.001) cohorts. GSEA revealed enrichment of the natural killer cell-mediated cytotoxicity pathway in the MM-RDLM low cohort, and mIHC showed significantly higher percentages of CD3-, CD56-, and CD8-positive cells in the MM-RDLM low group. The MM-RDLM model demonstrated strong predictive performance for early postoperative recurrence of HCC. These findings help identify patients at high risk of early recurrence and provide insights into the potential underlying biological mechanisms.
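
One common way to integrate a radiomics model and a DL model, as MM-RDLM does, is late fusion: stack the two scores and learn a combiner. The sketch below uses a logistic stacker on simulated scores; the paper's actual fusion scheme may differ, and all numbers here are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical late-fusion sketch: combine a radiomics score and a DL
# score into one recurrence predictor. Data are simulated, not the study's.
rng = np.random.default_rng(7)
n = 519                                    # matches the cohort size
risk = rng.normal(size=n)                  # latent recurrence risk
y = (risk > 0).astype(int)
rad_score = risk + rng.normal(0, 1.0, n)   # noisier radiomics view
dl_score = risk + rng.normal(0, 0.6, n)    # stronger DL view

X = np.column_stack([rad_score, dl_score])
fused = LogisticRegression().fit(X, y)
auc_rad = roc_auc_score(y, rad_score)
auc_dl = roc_auc_score(y, dl_score)
auc_fused = roc_auc_score(y, fused.predict_proba(X)[:, 1])
print(auc_rad, auc_dl, auc_fused)
```

Because the two views carry partly independent noise, the fused score outperforms either alone, which is the pattern the abstract reports (0.770 and 0.846 individually versus 0.955 combined).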

An innovative machine learning-based algorithm for diagnosing pediatric ovarian torsion.

Boztas AE, Sencan E, Payza AD, Sencan A

PubMed | June 16, 2025
We aimed to develop a machine learning (ML) algorithm combining physical examination, sonographic findings, and laboratory markers. We retrospectively analyzed data from 70 patients with confirmed ovarian torsion followed and treated in our clinic and from 73 control patients who presented to the emergency department between 2013 and 2023 with similar complaints but no ovarian torsion detected on ultrasound. Sonographic findings, laboratory values, and clinical status were examined and fed into three supervised ML systems to identify and develop viable decision algorithms. Presence of nausea/vomiting and symptom duration were statistically significant for ovarian torsion (p<0.05), whereas abdominal pain and a palpable mass on physical examination were not (p>0.05). White blood cell count (WBC), neutrophil/lymphocyte ratio (NLR), systemic immune-inflammation index (SII), systemic inflammation response index (SIRI), and high C-reactive protein values were highly significant predictors of torsion (p<0.001, p<0.05). Ovarian size ratio, medialization, the follicular ring sign, and free pelvic fluid on ultrasound were statistically significant in the torsion group (p<0.001). We used supervised ML algorithms, including decision trees, random forests, and LightGBM, to classify patients as controls or as having torsion. Evaluated with 5-fold cross-validation, the decision tree model achieved an average F1-score of 98%, an accuracy of 98%, and a specificity of 100% across folds. This study represents the first ML algorithm that integrates clinical, laboratory, and ultrasonographic findings for the diagnosis of pediatric ovarian torsion, with over 98% accuracy.
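
The evaluation protocol (decision tree scored with 5-fold cross-validated F1) can be sketched as follows. The features are synthetic stand-ins for the clinical, laboratory, and sonographic variables, with an assumed upward shift in torsion cases; none of this is the study's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Sketch of the evaluation protocol only, on simulated features.
rng = np.random.default_rng(3)
y = np.array([1] * 70 + [0] * 73)               # 70 torsion, 73 controls
# Hypothetical features, e.g. WBC, NLR, ovarian size ratio, symptom duration;
# torsion cases are shifted upward to mimic the reported group differences.
X = rng.normal(size=(len(y), 4)) + 2.5 * y[:, None]

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
f1 = cross_val_score(DecisionTreeClassifier(random_state=0), X, y,
                     scoring="f1", cv=cv)
print(round(f1.mean(), 2))
```

Stratified folds keep the torsion/control ratio stable across splits, which matters with a cohort of only 143 patients.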

Reaction-Diffusion Model for Brain Spacetime Dynamics.

Li Q, Calhoun VD

PubMed | June 16, 2025
The human brain exhibits intricate spatiotemporal dynamics, which can be described and understood through the framework of complex dynamic systems theory. In this study, we leverage functional magnetic resonance imaging (fMRI) data to investigate reaction-diffusion processes in the brain. A reaction-diffusion process refers to the interaction between two or more substances that spread through space and react with each other over time, often resulting in the formation of patterns or waves of activity. Building on this empirical foundation, we apply a reaction-diffusion framework inspired by theoretical physics to simulate the emergence of brain spacetime vortices: dynamic, swirling patterns of brain activity that emerge and evolve across both time and space. We explore how reaction-diffusion processes can govern the formation and propagation of these vortices, integrating computational modeling with fMRI data to characterize their spatiotemporal properties and offering new insights into the fundamental principles of brain organization. This work highlights the potential of reaction-diffusion models as an alternative framework for understanding brain spacetime dynamics.
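
A minimal, generic member of the model class the paper applies to fMRI is the two-species Gray-Scott reaction-diffusion system, which produces spatial patterns from a small local perturbation. The parameters below are textbook demonstration values, not fitted to brain data.

```python
import numpy as np

# Explicit-Euler Gray-Scott reaction-diffusion on a periodic 2-D grid:
# u is consumed in the reaction u + 2v -> 3v, replenished at feed rate f;
# v is produced by the reaction and removed at rate f + k.
def laplacian(z):
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)

def step(u, v, du=0.16, dv=0.08, f=0.035, k=0.065, dt=1.0):
    uvv = u * v * v
    u += dt * (du * laplacian(u) - uvv + f * (1 - u))
    v += dt * (dv * laplacian(v) + uvv - (f + k) * v)
    return u, v

n = 64
u = np.ones((n, n)); v = np.zeros((n, n))
u[28:36, 28:36] = 0.50                     # seed a local perturbation
v[28:36, 28:36] = 0.25
for _ in range(500):
    u, v = step(u, v)
print(v.max() - v.min())                   # v is now spatially non-uniform
```

The same qualitative mechanism, local reaction plus spatial diffusion, underlies the vortex-like activity patterns the study simulates, though the paper's model and its coupling to fMRI data are more elaborate.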

Appropriateness of acute breast symptom recommendations provided by ChatGPT.

Byrd C, Kingsbury C, Niell B, Funaro K, Bhatt A, Weinfurtner RJ, Ataya D

PubMed | June 16, 2025
We evaluated the accuracy of ChatGPT-3.5's responses to common questions regarding acute breast symptoms and explored whether using lay language, as opposed to medical language, affected the accuracy of the responses. Twenty questions addressing acute breast conditions were formulated, informed by the American College of Radiology (ACR) Appropriateness Criteria (AC) and our clinical experience at a tertiary referral breast center: seven addressed the most common acute breast symptoms, nine addressed pregnancy-associated breast symptoms, and four addressed specific management and imaging recommendations for a palpable breast abnormality. Each question was submitted three times to ChatGPT-3.5, and all responses were assessed by five fellowship-trained breast radiologists. Evaluation criteria included clinical judgment and adherence to the ACR guidelines, with responses scored as 1) "appropriate," 2) "inappropriate" if any response contained inappropriate information, or 3) "unreliable" if responses were inconsistent; a majority vote determined the appropriateness of each question. ChatGPT-3.5 generated appropriate responses for 7/7 (100%) questions regarding common acute breast symptoms when phrased both colloquially and using standard medical terminology. In contrast, responses were appropriate for only 3/9 (33%) questions about pregnancy-associated breast symptoms and 3/4 (75%) questions about management and imaging recommendations for a palpable breast abnormality. ChatGPT-3.5 can automate healthcare information on the appropriate management of acute breast symptoms whether prompted with standard medical terminology or lay phrasing. However, physician oversight remains critical given the inappropriate recommendations for pregnancy-associated breast symptoms and for management of palpable abnormalities.
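
The majority-vote scoring rule over the five radiologists' ratings is straightforward to implement. The question keys and ratings below are invented examples, not the study's assessments.

```python
from collections import Counter

# Majority vote over five raters' labels for each question; the label
# names mirror the abstract's scoring scheme. All ratings are invented.
def majority_vote(ratings):
    return Counter(ratings).most_common(1)[0][0]

assessments = {
    "mastitis_symptoms": ["appropriate"] * 5,                     # hypothetical
    "pregnancy_lump": ["inappropriate", "inappropriate",          # hypothetical
                       "inappropriate", "appropriate", "appropriate"],
}
verdicts = {q: majority_vote(r) for q, r in assessments.items()}
print(verdicts)
```

With an odd number of raters and the "inappropriate"/"unreliable" escalation rules applied per response first, every question resolves to a single verdict.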

Classification of glioma grade and Ki-67 level prediction in MRI data: A SHAP-driven interpretation.

Bhuiyan EH, Khan MM, Hossain SA, Rahman R, Luo Q, Hossain MF, Wang K, Sumon MSI, Khalid S, Karaman M, Zhang J, Chowdhury MEH, Zhu W, Zhou XJ

PubMed | June 16, 2025
This study focuses on artificial intelligence-driven classification of glioma grade and Ki-67 level using T2w-FLAIR MRI, exploring the association of Ki-67 biomarkers with deep learning (DL) features through explainable artificial intelligence (XAI) and SHapley Additive exPlanations (SHAP). This IRB-approved study included 101 patients with glioma whose MR images were acquired with the T2w-FLAIR sequence. We extracted DL bottleneck features from the glioma MR images using ResNet50, and principal component analysis (PCA) was deployed for dimensionality reduction. XAI was used to identify potential features, and XGBoost classified the histologic grade of the glioma and the Ki-67 level. We integrated the potential DL features with patient demographics (age and sex) and Ki-67 biomarkers, utilizing SHAP to determine the model's essential features and interactions. Glioma grade classification and Ki-67 level prediction achieved overall accuracies of 0.94 and 0.91, respectively, with precision scores of 0.92, 0.94, and 0.96 for glioma grades 2, 3, and 4, and 0.88, 0.94, and 0.97 for Ki-67 levels (low: 5%≤Ki-67<10%; moderate: 10%≤Ki-67≤20%; high: Ki-67>20%). Corresponding F1-scores were 0.95, 0.88, and 0.96 for glioma grades and 0.92, 0.93, and 0.87 for Ki-67 levels. SHAP analysis further highlighted a strong association between the bottleneck DL features and Ki-67 biomarkers, demonstrating their potential to differentiate glioma grades and Ki-67 levels while offering valuable insights into glioma aggressiveness. These results underscore the potential of AI-driven MRI analysis to enhance clinical decision-making in glioma management.
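
The feature pipeline (high-dimensional bottleneck features, PCA reduction, boosted-tree classifier) can be sketched end to end. The "features" here are random stand-ins for ResNet50 activations with an injected grade signal, and sklearn's GradientBoostingClassifier substitutes for XGBoost; nothing below is the study's data or code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Pipeline sketch: bottleneck features -> PCA -> boosted-tree classifier.
rng = np.random.default_rng(0)
n, d = 101, 2048                              # 101 patients, ResNet50-sized dims
grade = rng.integers(0, 3, size=n)            # glioma grades 2/3/4 encoded 0/1/2
X = rng.normal(size=(n, d))
X[:, :10] += 2.0 * grade[:, None]             # hypothetical grade-related signal

X_pca = PCA(n_components=20, random_state=0).fit_transform(X)
Xtr, Xte, ytr, yte = train_test_split(X_pca, grade, test_size=0.3,
                                      random_state=0, stratify=grade)
clf = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
acc = accuracy_score(yte, clf.predict(Xte))
print(round(acc, 2))
```

PCA is the key step when n (101 patients) is far smaller than d (thousands of bottleneck dimensions): without it, tree ensembles tend to overfit individual noisy activations.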
