Page 245 of 6576562 results

Max Krähenmann, Sergio Tascon-Morales, Fabian Laumer, Julia E. Vogt, Ece Ozkan

arXiv preprint · Aug 20, 2025
Volumetric ultrasound has the potential to significantly improve diagnostic accuracy and clinical decision-making, yet its widespread adoption remains limited by dependence on specialized hardware and restrictive acquisition protocols. In this work, we present a novel unsupervised framework for reconstructing 3D anatomical structures from freehand 2D transvaginal ultrasound (TVS) sweeps, without requiring external tracking or learned pose estimators. Our method adapts the principles of Gaussian Splatting to the domain of ultrasound, introducing a slice-aware, differentiable rasterizer tailored to the unique physics and geometry of ultrasound imaging. We model anatomy as a collection of anisotropic 3D Gaussians and optimize their parameters directly from image-level supervision, leveraging sensorless probe motion estimation and domain-specific geometric priors. The result is a compact, flexible, and memory-efficient volumetric representation that captures anatomical detail with high spatial fidelity. This work demonstrates that accurate 3D reconstruction from 2D ultrasound images can be achieved through purely computational means, offering a scalable alternative to conventional 3D systems and enabling new opportunities for AI-assisted analysis and diagnosis.
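The core idea — modeling anatomy as anisotropic 3D Gaussians and rendering them on a 2D slice — can be illustrated with a toy, non-differentiable sketch. This is not the paper's rasterizer: the function name `render_slice`, the brute-force per-pixel loop, and the axial-plane parameterization are all assumptions for illustration; the real method uses a differentiable, ultrasound-specific rasterizer.

```python
import numpy as np

def render_slice(means, covs, weights, plane_z, grid):
    """Toy slice-aware splat: accumulate anisotropic 3D Gaussian densities
    on the axial plane z = plane_z. Out-of-plane Gaussians contribute
    little because the 3D Mahalanobis distance grows with |z - mu_z|."""
    xs, ys = grid
    img = np.zeros((len(ys), len(xs)))
    for mu, cov, w in zip(means, covs, weights):
        inv = np.linalg.inv(cov)                 # precision matrix
        for i, y in enumerate(ys):
            for j, x in enumerate(xs):
                d = np.array([x, y, plane_z]) - mu
                img[i, j] += w * np.exp(-0.5 * d @ inv @ d)
    return img
```

In the actual framework this forward model would be differentiable, so the Gaussian means, covariances, and weights could be optimized from image-level supervision.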

Runshi Zhang, Bimeng Jie, Yang He, Junchen Wang

arXiv preprint · Aug 20, 2025
Computer-aided surgical simulation is a critical component of orthognathic surgical planning, where accurately simulating face-bone shape transformations is significant. Traditional biomechanical simulation methods are limited by long computation times, labor-intensive data processing and low accuracy. Recently, deep learning-based simulation methods have been proposed that view this problem as a point-to-point transformation between skeletal and facial point clouds. However, these approaches cannot process large-scale point sets, have limited receptive fields that lead to noisy points, and employ complex registration-based preprocessing and postprocessing operations. These shortcomings limit the performance and widespread applicability of such methods. Therefore, we propose a Transformer-based coarse-to-fine point movement network (TCFNet) to learn unique, complicated correspondences at the patch and point levels for dense face-bone point cloud transformations. This end-to-end framework adopts a Transformer-based network and a local information aggregation network (LIA-Net) in the first and second stages, respectively, which reinforce each other to generate precise point movement paths. LIA-Net can effectively compensate for the neighborhood precision loss of the Transformer-based network by modeling local geometric structures (edges, orientations and relative position features). The global features from the first stage are employed to guide the local displacement using a gated recurrent unit. Inspired by deformable medical image registration, we propose an auxiliary loss that can utilize expert knowledge for reconstructing critical organs. Compared with the existing state-of-the-art (SOTA) methods on gathered datasets, TCFNet achieves outstanding evaluation metrics and visualization results. The code is available at https://github.com/Runshi-Zhang/TCFNet.
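The GRU-based guidance step — global features gating a local displacement feature — can be sketched abstractly. This is a hedged toy, not LIA-Net's implementation: the function name `gru_gate`, the weight layout, and the choice of treating the global feature as the GRU hidden state are all assumptions made for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_gate(global_feat, local_feat, Wz, Wr, Wh):
    """Toy GRU-style cell: the global feature plays the hidden state h,
    the local displacement feature plays the input x, so the update gate
    decides how much local detail overrides the global guidance."""
    x, h = local_feat, global_feat
    z = sigmoid(Wz @ np.concatenate([x, h]))          # update gate
    r = sigmoid(Wr @ np.concatenate([x, h]))          # reset gate
    h_tilde = np.tanh(Wh @ np.concatenate([x, r * h]))  # candidate state
    return (1 - z) * h + z * h_tilde
```

With zero weights, both gates sit at 0.5 and the candidate state is zero, so the output is simply half the global feature — a quick sanity check on the gating arithmetic.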

Nzatsi MC, Varmenot N, Sarrut D, Delpon G, Cherel M, Rousseau C, Ferrer L

PubMed paper · Aug 20, 2025
Internal vectorised therapies, particularly with [177Lu]-labelled agents, are increasingly used for metastatic prostate cancer and neuroendocrine tumours. However, routine dosimetry for organs-at-risk and tumours remains limited due to the complexity and time requirements of current protocols. We developed a Generative Adversarial Network (GAN) to transform rapid 6 s SPECT projections into synthetic 30 s-equivalent projections. SPECT data from twenty patients and phantom acquisitions were collected at multiple time-points. The GAN accurately predicted 30 s projections, enabling estimation of time-integrated activities in kidneys and liver with maximum errors below 6 % and 1 %, respectively, compared to standard acquisitions. For tumours and phantom spheres, results were more variable. On phantom data, GAN-inferred reconstructions showed lower biases for spheres of 20, 8, and 1 mL (8.2 %, 6.9 %, and 21.7 %) compared to direct 6 s acquisitions (12.4 %, 20.4 %, and 24.0 %). However, in patient lesions, 37 segmented tumours showed higher median discrepancies in cumulated activity for the GAN (15.4 %) than for the 6 s approach (4.1 %). Our preliminary results indicate that the GAN can provide reliable dosimetry for organs-at-risk, but further optimisation is needed for small lesion quantification. This approach could reduce SPECT acquisition time from 45 to 9 min for standard three-bed studies, potentially facilitating wider adoption of dosimetry in nuclear medicine and addressing challenges related to toxicity and cumulative absorbed doses in personalised radiopharmaceutical therapy.
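Estimating time-integrated activity from a few SPECT time points is a standard dosimetry step; a common recipe (not necessarily the one used in this study) is trapezoidal integration over the measured points plus an analytic mono-exponential tail at the physical decay rate. The function name and the example numbers below are illustrative assumptions.

```python
import numpy as np

T_HALF_LU177_H = 6.647 * 24.0      # Lu-177 physical half-life, in hours

def time_integrated_activity(t_h, a_mbq, lam=np.log(2) / T_HALF_LU177_H):
    """Trapezoidal integral of the measured time-activity points, plus an
    analytic tail from the last point to infinity assuming the activity
    then decays as A_n * exp(-lam * (t - t_n)), whose integral is A_n / lam."""
    t = np.asarray(t_h, dtype=float)
    a = np.asarray(a_mbq, dtype=float)
    measured = np.sum((a[1:] + a[:-1]) / 2.0 * np.diff(t))
    tail = a[-1] / lam
    return measured + tail
```

The 6 % / 1 % organ-level errors reported in the abstract refer to discrepancies in exactly this kind of time-integrated quantity between GAN-inferred and standard-duration acquisitions.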

Jansen GE, Molenaar MA, Schuuring MJ, Bouma BJ, Išgum I

PubMed paper · Aug 20, 2025
Accurate segmentation of the mitral valve in transthoracic echocardiography (TTE) enables the extraction of various anatomical parameters that are important for guiding clinical management. However, manual mitral valve segmentation is time-consuming and prone to interobserver variability. To support robust automatic analysis of mitral valve anatomy, we propose a novel AI-based method for mitral valve segmentation and anatomical measurement extraction. We retrospectively collected a set of echocardiographic exams from 1756 consecutive patients with suspected coronary artery disease. For these patients, we retrieved expert-defined scores for mitral regurgitation (MR) severity and follow-up characteristics. PLAX-view videos were automatically identified, and the inside borders of the mitral valve leaflets were manually segmented in 182 patients. To automatically segment mitral valve leaflets, we designed a deep neural network that takes a video frame and outputs a distance- and classification-map for each leaflet, supervised by manual segmentations. From the resulting automatic segmentations, we extracted leaflet length, annulus diameter, tenting area, and coaptation length. To demonstrate the clinical relevance of these automatically extracted measurements, we performed univariable and multivariable Cox regression survival analysis, with the clinical endpoint defined as heart-failure hospitalization or all-cause mortality. We trained the segmentation model on annotated frames of 111 patients, and tested segmentation performance on a set of 71 patients. For the survival analysis, we included 1,117 patients (mean age 64.1 ± 12.4 years, 58% male, median follow-up 3.3 years). The trained model achieved an average surface distance of 0.89 mm, a Hausdorff distance of 3.34 mm, and a temporal consistency score of 97%. Additionally, leaflet coaptation was accurately detected in 93% of annotated frames.
In univariable Cox regression, automated annulus diameter (>35 mm, hazard ratio (HR) = 2.38, p<0.001), tenting area (>2.4 cm<sup>2</sup>, HR = 2.48, p<0.001), tenting height (>10 mm, HR = 1.91, p<0.001), and coaptation length (>3 mm, HR = 1.53, p = 0.007) were significantly associated with the defined clinical endpoint. For reference, significant MR by expert assessment resulted in an HR of 2.31 (p<0.001). In multivariable Cox regression analysis, automated annulus diameter and coaptation length predicted the defined endpoint as independent parameters (p = 0.03 and p = 0.05, respectively). Our method allows accurate segmentation of the mitral valve in TTE, and enables fully automated quantification of key measurements describing mitral valve anatomy. This has the potential to improve risk stratification for cardiac patients.
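The univariable hazard ratios above come from Cox proportional-hazards fits. For a single binary covariate (e.g. annulus diameter above/below 35 mm), the fit reduces to a short Newton-Raphson iteration on the partial likelihood; the sketch below is a self-contained toy (function name and tie handling via Breslow's approximation are my assumptions), not the analysis code of the study.

```python
import numpy as np

def cox_hr_binary(time, event, x, iters=25):
    """Newton-Raphson fit of a one-covariate Cox proportional-hazards
    model (Breslow ties); returns the hazard ratio exp(beta)."""
    time, event, x = (np.asarray(v, dtype=float) for v in (time, event, x))
    beta = 0.0
    for _ in range(iters):
        score, info = 0.0, 0.0
        for t in np.unique(time[event == 1]):
            risk = time >= t                      # risk set at event time t
            w = np.exp(beta * x[risk])
            xbar = np.sum(w * x[risk]) / np.sum(w)
            var = np.sum(w * x[risk] ** 2) / np.sum(w) - xbar ** 2
            died = (time == t) & (event == 1)
            d = np.sum(died)                      # events at t (Breslow)
            score += np.sum(x[died]) - d * xbar   # partial-likelihood score
            info += d * var                       # observed information
        beta += score / info                      # Newton step
    return np.exp(beta)
```

In practice one would use an established survival library rather than hand-rolled Newton steps, but the toy makes the HR arithmetic concrete: HR > 1 means the x = 1 group experiences events at a proportionally higher rate.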

Park C, Choi J, Hwang J, Jeong H, Kim PH, Cho YA, Lee BS, Jung E, Kwon SH, Kim M, Jun H, Nam Y, Kim N, Yoon HM

PubMed paper · Aug 20, 2025
Neonatal pneumoperitoneum is a life-threatening condition requiring prompt diagnosis, yet its subtle radiographic signs pose diagnostic challenges, especially in emergency settings. To develop and validate a deep multi-task learning model for diagnosing neonatal pneumoperitoneum on radiographs and to assess its clinical utility across clinicians of varying experience levels. Retrospective diagnostic study using internal and external datasets. Internal data were collected between January 1995 and August 2018, while external data were sourced from 11 neonatal intensive care units. Tertiary hospital and multicenter validation settings. Internal dataset: 204 neonates (546 radiographs), external dataset: 378 radiographs (125 pneumoperitoneum cases, 214 non-pneumoperitoneum cases). Radiographs were reviewed by two pediatric radiologists. A reader study involved 4 physicians with varying experience levels. A deep multi-task learning model combining classification and segmentation tasks for pneumoperitoneum detection. The primary outcomes included diagnostic accuracy, area under the receiver operating characteristic curve (AUC), and inter-reader agreement. AI-assisted and unassisted reader performance metrics were compared. The AI model achieved an AUC of 0.98 (95 % CI, 0.94-1.00) and accuracy of 94 % (95 % CI, 85.1-99.6) in internal validation, and AUC of 0.89 (95 % CI, 0.85-0.92) with accuracy of 84.1 % (95 % CI, 80.4-87.8) in external validation. AI assistance improved reader accuracy from 82.5 % to 86.6 % (p < .001) and inter-reader agreement (kappa increased from 0.33-0.71 to 0.54-0.86). The multi-task learning model demonstrated excellent diagnostic performance and improved clinicians' diagnostic accuracy and agreement, suggesting its potential to enhance care in neonatal intensive care settings. All code is available at https://github.com/brody9512/NEC_MTL.
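A multi-task objective of this kind typically sums an image-level classification loss and a pixel-level segmentation loss. The sketch below is a generic numpy toy (binary cross-entropy plus soft Dice, with an assumed weighting `w_seg`), not the loss from the linked repository.

```python
import numpy as np

def multitask_loss(cls_prob, cls_label, seg_prob, seg_mask, w_seg=1.0, eps=1e-7):
    """Toy multi-task objective: binary cross-entropy on the image-level
    pneumoperitoneum label plus a soft Dice loss on the free-air mask."""
    p = np.clip(cls_prob, eps, 1 - eps)
    bce = -(cls_label * np.log(p) + (1 - cls_label) * np.log(1 - p))
    inter = np.sum(seg_prob * seg_mask)
    dice = (2 * inter + eps) / (np.sum(seg_prob) + np.sum(seg_mask) + eps)
    return bce + w_seg * (1 - dice)
```

Training the two heads jointly lets the segmentation supervision localize the subtle free-air pattern, which in turn regularizes the classifier — the usual motivation for multi-task designs like this one.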

Temtam A, Witherow MA, Ma L, Sadique MS, Moeller FG, Iftekharuddin KM

PubMed paper · Aug 20, 2025
Understanding the neurobiology of opioid use disorder (OUD) using resting-state functional magnetic resonance imaging (rs-fMRI) may help inform treatment strategies to improve patient outcomes. Recent literature suggests time-frequency characteristics of rs-fMRI blood oxygenation level-dependent (BOLD) signals may offer complementary information to traditional analysis techniques. However, existing studies of OUD analyze BOLD signals using measures computed across all time points. This study, for the first time in the literature, employs data-driven machine learning (ML) for time-frequency analysis of local neural activity within key functional networks to differentiate OUD subjects from healthy controls (HC). We obtain time-frequency features based on rs-fMRI BOLD signals from the default mode network (DMN), salience network (SN), and executive control network (ECN) for 31 OUD and 45 HC subjects. Then, we perform 5-fold cross-validation classification (OUD vs. HC) experiments to study the discriminative power of functional network features while taking into consideration significant demographic features. ML-based time-frequency analysis of DMN, SN, and ECN significantly (p < 0.05) outperforms chance baselines for discriminative power with mean F1 scores of 0.6675, 0.7090, and 0.6810, respectively, and mean AUCs of 0.7302, 0.7603, and 0.7103, respectively. Follow-up Boruta ML analysis of selected time-frequency (wavelet) features reveals significant (p < 0.05) detail coefficients for all three functional networks, underscoring the need for ML and time-frequency analysis of rs-fMRI BOLD signals in the study of OUD.
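The wavelet detail coefficients the authors feed into their ML pipeline can be illustrated with the simplest discrete wavelet, the Haar transform. The abstract does not name the wavelet family used, so treat this as a generic sketch; the function names and the energy feature are my assumptions.

```python
import numpy as np

def haar_dwt(signal, levels=3):
    """Multi-level Haar wavelet decomposition of a time series (e.g. an
    rs-fMRI BOLD signal); returns the detail coefficients per level."""
    s = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        if len(s) % 2:
            s = s[:-1]                              # drop the odd sample
        approx = (s[0::2] + s[1::2]) / np.sqrt(2)   # low-pass half
        detail = (s[0::2] - s[1::2]) / np.sqrt(2)   # high-pass half
        details.append(detail)
        s = approx
    return details

def detail_energy(signal, levels=3):
    """Per-level detail-coefficient energy: one candidate time-frequency
    feature per frequency band, usable as a classifier input."""
    return [float(np.sum(d ** 2)) for d in haar_dwt(signal, levels)]
```

Each level halves the frequency band, so the vector of detail energies summarizes where in time-frequency space a network's BOLD fluctuations concentrate — exactly the kind of feature a Boruta-style selection step would then rank.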

Ramos LR, Fernandes O, Sanchez TA

PubMed paper · Aug 20, 2025
Ayahuasca is an Amazonian psychedelic brew that contains dimethyltryptamine (DMT) and beta carbolines. Prolonged use has shown changes in cognitive-behavioral tasks, and in humans, there is evidence of changes in cortical thickness and an increase in neuroplasticity factors that could lead to modifications in functional neural circuits. To investigate the long-term effects of Ayahuasca usage through psychometric scales and fMRI data related to emotional processing using artificial intelligence tools. Retrospective Cross-sectional, case-control study. 38 healthy male participants (19 long-term Ayahuasca users and 19 non-user controls). 1.5 Tesla; gradient-echo T2*-weighted echo-planar imaging sequence during an implicit emotion processing task. Participants completed standardized psychometric scales including the Ego Resilience Scale (ER89). During fMRI, participants performed a gender judgment task using faces with neutral or aversive (disgust/fear) expressions. Whole-brain fMRI data were analyzed using multivariate pattern recognition. Group comparisons of psychometric scores were performed using Student's t-tests or Mann-Whitney U tests based on normality. Multivariate pattern classification and regression were performed using machine learning algorithms: Multiple Kernel Learning (MKL), Support Vector Machine (SVM), and Gaussian Process Classification/Regression (GPC/GPR), with k-fold cross-validation and permutation testing (n = 100-1000) to assess model significance (α = 0.05). Ayahuasca users (mean = 43.89; SD = 5.64) showed significantly higher resilience scores compared to controls (mean = 39.05; SD = 5.34). The MKL classifier distinguished users from controls with 75% accuracy (p = 0.005). The GPR model significantly predicted individual resilience scores (r = 0.69). Long-term Ayahuasca use may be associated with altered emotional brain reactivity and increased psychological resilience. 
These findings support neural patterns consistent with long-term adaptations to Ayahuasca use that are detectable via fMRI and machine learning-based pattern analysis. Evidence Level: 4. Technical Efficacy: Stage 1.
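The study's permutation testing (n = 100-1000 shuffles) is the standard way to attach a p-value to a classifier's accuracy. The toy below uses a nearest-centroid classifier instead of MKL/SVM purely to keep the sketch self-contained; the classifier choice, function names, and train-equals-test shortcut are all assumptions for illustration.

```python
import numpy as np

def nearest_centroid_acc(Xtr, ytr, Xte, yte):
    """Accuracy of a two-class nearest-centroid classifier."""
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1) <
            np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return float(np.mean(pred == yte))

def permutation_pvalue(X, y, n_perm=200, seed=0):
    """Permutation test: refit under shuffled labels to build a null
    distribution of accuracies, then count how often chance ties or
    beats the observed accuracy (with the standard +1 correction)."""
    rng = np.random.default_rng(seed)
    obs = nearest_centroid_acc(X, y, X, y)       # toy: train = test
    null = []
    for _ in range(n_perm):
        yp = rng.permutation(y)
        null.append(nearest_centroid_acc(X, yp, X, yp))
    p = (1 + np.sum(np.array(null) >= obs)) / (1 + n_perm)
    return obs, p
```

In a real analysis the accuracy would come from cross-validation rather than resubstitution, but the null-distribution logic is the same as in the paper's significance testing.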

Lu Z, Yuan C, Xu Q, Feng Y, Xia Q, Wang X, Zhu J, Wu J, Wang T, Chen J, Wang X, Wang Q

PubMed paper · Aug 20, 2025
With the growing complexity of total hip arthroplasty (THA) for high hip dislocation (HHD), artificial intelligence (AI)-assisted three-dimensional (3D) preoperative planning has emerged as a promising tool to enhance surgical accuracy. This study compared clinical outcomes of AI-assisted 3D versus conventional two-dimensional (2D) X-ray preoperative planning in such cases. A retrospective cohort of 92 patients with Crowe type II-IV HHD who underwent THA between May 2020 and January 2023 was analyzed. Patients received either AI-assisted 3D preoperative planning (n = 49) or 2D X-ray preoperative planning (n = 43). The primary outcome was the accuracy of implant size prediction. Secondary outcomes included operative time, blood loss, leg length discrepancy (LLD), implant positioning, functional scores (Harris Hip Score [HHS], WOMAC, VAS), complications, and implant survival at 24 months. At 24 months, both groups demonstrated significant improvements in functional outcomes. Compared to the 2D X-ray group, the AI-3D group showed higher accuracy in implant size prediction (acetabular cup: 59.18% vs. 30.23%; femoral stem: 65.31% vs. 41.86%; both p < 0.05), a greater proportion of cups placed within the Lewinnek and Callanan safe zones (p < 0.05), shorter operative time, reduced intraoperative blood loss, and more effective correction of leg length discrepancy (all p < 0.05). No significant differences were observed in HHS, WOMAC, or VAS scores between groups at 24 months (all p > 0.05). Implant survivorship was also comparable (100% vs. 97.7%; p = 0.283), with one revision noted in the 2D X-ray group. AI-assisted 3D preoperative planning improves prosthesis selection accuracy, implant positioning, and perioperative outcomes in Crowe type II-IV HHD THA, although 2-year functional and survival outcomes were comparable to 2D X-ray preoperative planning. 
Considering the higher cost, radiation exposure, and workflow complexity, its broader application warrants further investigation, particularly in identifying patients who may benefit most.

Büyüktoka RE, Surucu M, Erekli Derinkaya PB, Adibelli ZH, Salbas A, Koc AM, Buyuktoka AD, Isler Y, Ugur MA, Isiklar E

PubMed paper · Aug 20, 2025
To create and test a locally adapted large language model (LLM) for automated scoring of radiology requisitions based on the Reason for exam Imaging Reporting and Data System (RI-RADS), and to evaluate its performance against reference standards. This retrospective, double-center study included 131,683 radiology requisitions from two institutions. A bidirectional encoder representations from transformers (BERT)-based model was trained using 101,563 requisitions from Center 1 (including 1500 synthetic examples) and externally tested on 18,887 requisitions from Center 2. The model's performance for two different classification strategies was evaluated against the reference standard created by three different radiologists. Model performance was assessed using Cohen's kappa, accuracy, F1-score, sensitivity, and specificity with 95% confidence intervals. A total of 18,887 requisitions were evaluated for the external test set. External testing yielded a performance with an F1-score of 0.93 (95% CI: 0.912-0.943); κ = 0.88 (95% CI: 0.871-0.884). Performance was highest in the common categories RI-RADS D and X (F1 ≥ 0.96) and lowest for the rare categories RI-RADS A and B (F1 ≤ 0.49). When grouped into three categories (adequate, inadequate, and unacceptable), overall model performance improved [F1-score = 0.97; (95% CI: 0.96-0.97)]. The locally adapted BERT-based model demonstrated high performance and almost perfect agreement with radiologists in automated RI-RADS scoring, showing promise for integration into radiology workflows to improve requisition completeness and communication. Question Can an LLM accurately and automatically score radiology requisitions based on standardized criteria to address the challenges of incomplete information in radiological practice? Findings A locally adapted BERT-based model demonstrated high performance (F1-score 0.93) and almost perfect agreement with radiologists in automated RI-RADS scoring across a large, multi-institutional dataset.
Clinical relevance LLMs offer a scalable solution for automated scoring of radiology requisitions, with the potential to improve workflow in radiology. Further improvement and integration into clinical practice could enhance communication, contributing to better diagnoses and patient care.
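The agreement metrics reported here (Cohen's kappa and F1) are both simple functions of the label confusion. A minimal pure-Python sketch, assuming nothing about the study's own evaluation code:

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each rater's label frequencies."""
    n = len(y_true)
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n
    ct, cp = Counter(y_true), Counter(y_pred)
    pe = sum(ct[c] * cp[c] for c in ct) / (n * n)
    return (po - pe) / (1 - pe)

def f1_binary(y_true, y_pred, positive):
    """F1 for one class, the harmonic mean of precision and recall."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn)
```

Kappa's chance correction is why a model can score high raw accuracy on skewed categories (like the rare RI-RADS A and B grades) yet still show only moderate agreement.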

Mytsyk Y, Kowal P, Kobilnyk Y, Lesny M, Skrzypczyk M, Stroj D, Dosenko V, Kucheruk O

PubMed paper · Aug 20, 2025
Renal cell carcinoma (RCC) is a prevalent malignancy with highly variable outcomes. MicroRNA-15a (miR-15a) has emerged as a promising prognostic biomarker in RCC, linked to angiogenesis, apoptosis, and proliferation. Radiogenomics integrates radiological features with molecular data to non-invasively predict biomarkers, offering valuable insights for precision medicine. This study aimed to develop a machine learning-assisted radiogenomic model to predict miR-15a expression in RCC. A retrospective analysis was conducted on 64 RCC patients who underwent preoperative multiphase contrast-enhanced CT or MRI. Radiological features, including tumor size, necrosis, and nodular enhancement, were evaluated. MiR-15a expression was quantified using real-time qPCR from archived tissue samples. Polynomial regression and Random Forest models were employed for prediction, and hierarchical clustering with K-means analysis was used for phenotypic stratification. Statistical significance was assessed using non-parametric tests and machine learning performance metrics. Tumor size was the strongest radiological predictor of miR-15a expression (adjusted R<sup>2</sup> = 0.8281, p < 0.001). High miR-15a levels correlated with aggressive features, including necrosis and nodular enhancement (p < 0.05), while lower levels were associated with cystic components and macroscopic fat. The Random Forest regression model explained 65.8% of the variance in miR-15a expression (R<sup>2</sup> = 0.658). For classification, the Random Forest classifier demonstrated exceptional performance, achieving an AUC of 1.0, a precision of 1.0, a recall of 0.9, and an F1-score of 0.95. Hierarchical clustering effectively segregated tumors into aggressive and indolent phenotypes, consistent with clinical expectations. Radiogenomic analysis using machine learning provides a robust, non-invasive approach to predicting miR-15a expression, enabling enhanced tumor stratification and personalized RCC management. 
These findings underscore the clinical utility of integrating radiological and molecular data, paving the way for broader adoption of precision medicine in oncology.
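The adjusted R² the authors report for tumor size predicting miR-15a expression comes from an ordinary polynomial fit. A minimal numpy sketch of that computation (the function name, degree, and synthetic data are my assumptions, not the study's pipeline):

```python
import numpy as np

def polyfit_r2(x, y, degree=2):
    """Least-squares polynomial fit (e.g. tumor size -> expression) with
    plain and adjusted R^2; adjusted R^2 penalizes the k fitted terms."""
    coeffs = np.polyfit(x, y, degree)
    pred = np.polyval(coeffs, x)
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r2 = 1 - ss_res / ss_tot
    n, k = len(x), degree
    adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)
    return r2, adj
```

The adjustment matters in small radiogenomic cohorts (here n = 64): with few samples, an unadjusted R² overstates how much of the expression variance a flexible fit really explains.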