
A comparative three-dimensional analysis of skeletal and dental changes induced by Herbst and PowerScope appliances in Class II malocclusion treatment: a retrospective cohort study.

Caleme E, Moro A, Mattos C, Miguel J, Batista K, Claret J, Leroux G, Cevidanes L

PubMed · Jul 3, 2025
Skeletal Class II malocclusion is commonly treated with mandibular advancement appliances during growth, and comparing the effectiveness of different appliances can help optimize treatment outcomes. This study aimed to compare the dental and skeletal outcomes of Class II malocclusion treatment using the Herbst and PowerScope appliances in conjunction with fixed orthodontic therapy. This retrospective comparative study included 46 consecutively treated patients from two university clinics: 26 treated with PowerScope and 20 with the Herbst MiniScope. CBCT scans were obtained before and after treatment. Skeletal and dental changes were analyzed using maxillary and mandibular voxel-based regional superimpositions and cranial base registrations, aided by AI-based landmark detection. Measurement bias was minimized by using a calibrated, blinded examiner, and no patients were excluded from the analysis. Owing to the study's retrospective nature, no prospective registration was performed; the institutional review board granted ethical approval. The Herbst group showed greater anterior displacement at B-point and Pogonion than the PowerScope group (2.4 mm and 2.6 mm, respectively). Both groups exhibited improved maxillomandibular relationships: the SNA angle decreased in the PowerScope group and the SNB angle increased in the Herbst group. Vertical skeletal changes were observed at points A, B, and Pog in both groups. The Herbst appliance also produced less lower incisor proclination and more pronounced distal movement of the upper incisors. Both appliances effectively corrected Class II malocclusion; Herbst promoted more pronounced skeletal advancement, while PowerScope induced greater dental compensation. These findings may be generalizable to similarly aged Class II patients in cervical vertebral maturation (CVM) stages 3-4.

Joint Shape Reconstruction and Registration via a Shared Hybrid Diffeomorphic Flow.

Shi H, Wang P, Zhang S, Zhao X, Yang B, Zhang C

PubMed · Jul 3, 2025
Deep implicit functions (DIFs) represent shapes effectively by using a neural network to map 3D spatial coordinates to scalar values that encode the shape's geometry, but establishing correspondences between shapes directly is difficult, limiting their use in medical image registration. Recently proposed deformation-field-based methods learn implicit templates by combining template-field learning with DIFs and deformation-field learning, establishing shape correspondence through deformation fields. Although these approaches enable joint learning of shape representation and shape correspondence, the decoupled optimization of the template field and the deformation field, caused by the absence of deformation annotations, leads to a relatively accurate template field but an under-optimized deformation field. In this paper, we propose a novel implicit template learning framework via a shared hybrid diffeomorphic flow (SHDF), which enables shared optimization of deformation and template, contributing to better deformations and shape representation. Specifically, we formulate the signed distance function (SDF, a type of DIF) as a one-dimensional (1D) integral, unifying dimensions to match the form used in solving the ordinary differential equation (ODE) for deformation field learning. The SDF in 1D integral form is then integrated seamlessly into deformation field learning. Using a recurrent learning strategy, we frame shape representation and deformation as solving different initial value problems of the same ODE. We also introduce a global smoothness regularization to handle local optima caused by limited outside-of-shape data. Experiments on medical datasets show that SHDF outperforms state-of-the-art methods in both shape representation and registration.
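To make the ODE-based view concrete, here is a minimal PyTorch sketch of the core mechanism the abstract describes: a learned velocity field whose ODE is integrated to warp 3D query points, the same machinery under which the paper casts the SDF as a 1D integral. The network architecture, step count, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an ODE-based diffeomorphic flow: a shared velocity field
# integrated to deform query points. All names (VelocityField, n_steps, etc.)
# are illustrative, not from the paper.
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Maps (x, y, z, t) to a 3D velocity; a stand-in for the learned flow."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        t_col = t.expand(x.shape[0], 1)
        return self.net(torch.cat([x, t_col], dim=-1))

def integrate_flow(v: VelocityField, x0: torch.Tensor, n_steps: int = 16):
    """Forward-Euler integration of dx/dt = v(x, t) from t=0 to t=1.

    Returns the deformed points; solving the same ODE backwards gives the
    (approximately) inverse map, which is what makes the flow diffeomorphic
    in the continuous limit.
    """
    x, dt = x0, 1.0 / n_steps
    for k in range(n_steps):
        t = torch.tensor([[k * dt]])
        x = x + dt * v(x, t)
    return x

# Usage: warp 1000 random query points of one shape toward the template.
v = VelocityField()
pts = torch.randn(1000, 3)
warped = integrate_flow(v, pts)
print(warped.shape)  # torch.Size([1000, 3])
```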

Fat-water MRI separation using deep complex convolution network.

Ganeshkumar M, Kandasamy D, Sharma R, Mehndiratta A

PubMed · Jul 3, 2025
Deep complex convolutional networks (DCCNs) use complex-valued convolutions and can process complex-valued MRI signals directly, without splitting them into real-valued magnitude and phase components. The performance of a DCCN and a real-valued U-Net was thoroughly investigated in a physics-informed, subject-specific ad-hoc reconstruction method for fat-water separation and compared against a widely used reference approach. A comprehensive test dataset (n = 33) was used for performance analysis: the 2012 ISMRM fat-water separation workshop dataset, containing 28 batches of multi-echo MRIs with 3-15 echoes from the abdomen, thigh, knee, and phantoms, acquired with 1.5 T and 3 T scanners, plus multi-echo MRIs of five MAFLD patients acquired at our clinical radiology department. Quantitatively, the DCCN produced fat-water maps with better normalized RMS error and structural similarity index (SSIM) relative to the reference approach than the real-valued U-Net in the ad-hoc reconstruction method for fat-water separation. The DCCN achieved an overall average SSIM of 0.847 ± 0.069 for fat maps and 0.861 ± 0.078 for water maps, whereas the U-Net achieved only 0.653 ± 0.166 and 0.729 ± 0.134, respectively. The average liver PDFF from the DCCN achieved a correlation coefficient R of 0.847 with the reference approach.
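For readers unfamiliar with complex-valued convolutions, the following is a minimal PyTorch sketch of the building block DCCNs rely on: the complex product (a+bi)(c+di) = (ac-bd) + (ad+bc)i realized with two real-valued convolutions. Layer sizes and names are illustrative, not the paper's implementation.

```python
# Minimal complex-valued 2D convolution: two real-valued convolutions
# combined according to complex multiplication. Names are illustrative.
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3, padding: int = 1):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, k, padding=padding)  # real weights
        self.conv_i = nn.Conv2d(in_ch, out_ch, k, padding=padding)  # imag weights

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: complex tensor of shape (B, C, H, W), e.g. a multi-echo MRI signal
        a, b = z.real, z.imag
        real = self.conv_r(a) - self.conv_i(b)   # ac - bd
        imag = self.conv_r(b) + self.conv_i(a)   # ad + bc
        return torch.complex(real, imag)

# Usage on a toy complex-valued "multi-echo" input (6 echoes as channels).
echoes = torch.randn(1, 6, 64, 64) + 1j * torch.randn(1, 6, 64, 64)
layer = ComplexConv2d(6, 16)
out = layer(echoes.to(torch.complex64))
print(out.shape, out.dtype)  # torch.Size([1, 16, 64, 64]) torch.complex64
```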

Clinical obstacles to machine-learning POCUS adoption and system-wide AI implementation (The COMPASS-AI survey).

Wong A, Roslan NL, McDonald R, Noor J, Hutchings S, D'Costa P, Via G, Corradi F

PubMed · Jul 3, 2025
Point-of-care ultrasound (POCUS) has become indispensable in various medical specialties. The integration of artificial intelligence (AI) and machine learning (ML) holds significant promise for further enhancing POCUS capabilities. However, a comprehensive understanding of healthcare professionals' perspectives on this integration is lacking. This study aimed to investigate the global perceptions, familiarity, and adoption of AI in POCUS among healthcare professionals. An international, web-based survey was conducted among healthcare professionals involved in POCUS. The survey instrument included sections on demographics, familiarity with AI, perceived utility, barriers (technological, training, trust, workflow, legal/ethical), and overall perceptions of AI-assisted POCUS. The data were analysed with descriptive statistics, frequency distributions, and group comparisons (chi-square/Fisher's exact test and t-test/Mann-Whitney U test). The survey gathered responses from 1154 healthcare professionals on perceived barriers to implementing AI in POCUS. Despite general enthusiasm, with 81.1% of respondents expressing agreement or strong agreement, significant barriers were identified. The most frequently cited single greatest barriers were Training & Education (27.1%) and Clinical Validation & Evidence (17.5%). Analysis also revealed that perceptions of specific barriers vary significantly with demographic factors, including region of practice, medical specialty, and years of healthcare experience. This novel global survey provides critical insights into the perceptions and adoption of AI in POCUS. The findings highlight considerable enthusiasm alongside crucial challenges, primarily concerning training, validation, guidelines, and support. Addressing these barriers is essential for the responsible and effective implementation of AI in POCUS.
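As a hedged illustration of the group comparisons the survey reports, the sketch below runs a chi-square test of whether the "single greatest barrier" distribution differs across regions of practice; the counts are invented placeholders, not survey data.

```python
# Chi-square test of barrier distribution by region. The contingency
# table below is hypothetical, for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: regions; columns: barrier categories
# (Training & Education, Clinical Validation & Evidence, Other).
table = np.array([
    [120, 80, 150],   # Region A (hypothetical counts)
    [ 90, 60, 110],   # Region B (hypothetical counts)
    [100, 55, 130],   # Region C (hypothetical counts)
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```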

Development of a deep learning-based automated diagnostic system (DLADS) for classifying mammographic lesions - a first large-scale multi-institutional clinical trial in Japan.

Yamaguchi T, Koyama Y, Inoue K, Ban K, Hirokaga K, Kujiraoka Y, Okanami Y, Shinohara N, Tsunoda H, Uematsu T, Mukai H

PubMed · Jul 3, 2025
Western countries have recently built evidence on mammographic artificial intelligence computer-aided diagnosis (AI-CADx) systems; however, their effectiveness has not yet been sufficiently validated in Japanese women. In this study, we aimed to establish the first Japanese mammographic AI-CADx system. We retrospectively collected screening or diagnostic mammograms from 63 institutions in Japan and randomly divided the images into training, validation, and test datasets at a balanced ratio of 8:1:1 on a case-level basis. The gold-standard annotation for the AI-CADx system was mammographic findings based on pathological references. The AI-CADx system was developed using SE-ResNet modules and a sliding-window algorithm, with the heatmap cut-off concentration gradient set at 15%. The system was considered accurate if it detected the presence of a malignant lesion in a breast cancer mammogram. The primary endpoint was defined as a sensitivity and specificity of over 80% for breast cancer diagnosis in the test dataset. We collected 20,638 mammograms from 11,450 Japanese women with a median age of 55 years, comprising 5019 breast cancer (24.3%), 5026 benign (24.4%), and 10,593 normal (51.3%) mammograms. In the test dataset of 2059 mammograms, the AI-CADx system achieved a sensitivity of 83.5% and a specificity of 84.7% for breast cancer diagnosis, with an AUC of 0.841 (DeLong 95% CI: 0.822-0.859). Accuracy was largely consistent regardless of breast density, mammographic findings, type of cancer, and mammography vendor (AUC range: 0.639-0.906). The developed Japanese mammographic AI-CADx system thus diagnosed breast cancer at the pre-specified sensitivity and specificity. We are planning a prospective study to validate its breast cancer diagnostic performance when used by Japanese physicians as a second reader. UMIN trial number UMIN000039009; registered 26 December 2019, https://www.umin.ac.jp/ctr/.
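A minimal sketch of the sliding-window heatmap scoring the pipeline describes, assuming a generic patch classifier in place of the SE-ResNet model; patch size, stride, and scorer are illustrative, and only the 15% cut-off comes from the abstract.

```python
# Sliding-window heatmap scoring: a patch classifier is swept across the
# mammogram, overlapping patch scores are averaged into a heatmap, and the
# exam is flagged if any region exceeds a cut-off.
import numpy as np

def sliding_window_heatmap(image, patch_score_fn, patch=128, stride=64):
    """Average overlapping patch scores into a per-pixel heatmap."""
    h, w = image.shape
    heat = np.zeros((h, w))
    count = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            s = patch_score_fn(image[y:y + patch, x:x + patch])
            heat[y:y + patch, x:x + patch] += s
            count[y:y + patch, x:x + patch] += 1
    return heat / np.maximum(count, 1)

def flag_exam(heatmap, cutoff=0.15):
    """Flag the mammogram as suspicious if any pixel exceeds the cut-off."""
    return bool((heatmap > cutoff).any())

# Usage with a dummy scorer standing in for the SE-ResNet patch classifier.
dummy_scorer = lambda patch_img: float(patch_img.mean() > 0.7)
img = np.random.rand(512, 512)
print(flag_exam(sliding_window_heatmap(img, dummy_scorer)))
```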

Radiological and Biological Dictionary of Radiomics Features: Addressing Understandable AI Issues in Personalized Prostate Cancer, Dictionary Version PM1.0.

Salmanpour MR, Amiri S, Gharibi S, Shariftabrizi A, Xu Y, Weeks WB, Rahmim A, Hacihaliloglu I

PubMed · Jul 3, 2025
Artificial intelligence (AI) can advance medical diagnostics, but limited interpretability restricts its clinical use. This work links standardized quantitative radiomics features (RFs) extracted from medical images with clinical frameworks such as PI-RADS, ensuring AI models are understandable and aligned with clinical practice. We investigate the connection between visual semantic features defined in PI-RADS and associated risk factors, moving beyond abnormal imaging findings and establishing a shared framework between medical and AI professionals by creating a standardized radiological/biological RF dictionary. Six interpretable and seven complex classifiers, combined with nine interpretable feature selection algorithms (FSAs), were applied to RFs extracted from segmented lesions in T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and apparent diffusion coefficient (ADC) multiparametric MRI sequences to predict TCIA-UCLA scores, grouped as low risk (scores 1-3) and high risk (scores 4-5). We then used the created dictionary to interpret the best predictive models. Combining sequences with FSAs including the ANOVA F-test, correlation coefficient, and Fisher score, and using logistic regression, identified key features: the 90th percentile from T2WI (reflecting hypo-intensity related to prostate cancer risk); variance from T2WI (lesion heterogeneity); shape metrics, including Least Axis Length and Surface Area to Volume ratio, from ADC (lesion shape and compactness); and Run Entropy from ADC (texture consistency). This approach achieved the highest average accuracy of 0.78 ± 0.01, significantly outperforming single-sequence methods (p < 0.05). The developed dictionary for Prostate-MRI (PM1.0) serves as a common language and fosters collaboration between clinical professionals and AI developers toward trustworthy AI solutions that support reliable, interpretable clinical decisions.
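As a hedged sketch of the best-performing recipe reported above (an interpretable FSA such as the ANOVA F-test feeding logistic regression over radiomics features pooled across sequences), the following uses scikit-learn on synthetic stand-in data; the feature matrix, labels, and the choice of k are assumptions, not the study's configuration.

```python
# ANOVA F-test feature selection + logistic regression for low- vs high-risk
# classification. X and y below are synthetic stand-ins for pooled radiomics
# features and TCIA-UCLA risk groups.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 300))   # 200 lesions x 300 pooled radiomics features
y = rng.integers(0, 2, size=200)  # 0 = low risk (scores 1-3), 1 = high risk (4-5)

model = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=20)),    # interpretable FSA
    ("clf", LogisticRegression(max_iter=1000)),  # interpretable classifier
])

acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```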

Development of a prediction model by combining tumor diameter and clinical parameters of adrenal incidentaloma.

Iwamoto Y, Kimura T, Morimoto Y, Sugisaki T, Dan K, Iwamoto H, Sanada J, Fushimi Y, Shimoda M, Fujii T, Nakanishi S, Mune T, Kaku K, Kaneto H

PubMed · Jul 3, 2025
When adrenal incidentalomas are detected, diagnosis is complicated by the need for endocrine stimulation tests and multimodality imaging to evaluate whether the tumor is hormone-producing. This study aimed to develop a machine-learning-based clinical model that combines computed tomography (CT) imaging and clinical parameters for adrenal tumor classification. This retrospective cohort study involved 162 patients who underwent hormone testing for adrenal incidentalomas at our institution. Nominal logistic regression analysis was used to identify predictive factors for hormone-producing adrenal tumors, and three random forest classification models were developed using clinical and imaging parameters. The study included 55 patients with non-functioning adrenal tumors (NFAT), 44 with primary aldosteronism (PA), 22 with mild autonomous cortisol secretion (MACS), 18 with Cushing's syndrome (CS), and 23 with pheochromocytoma (Pheo). A random forest classification model combining adrenal tumor diameter on CT, early-morning hormone measurements, and several clinical parameters showed high diagnostic accuracy for PA, Pheo, and CS (area under the curve: 0.88, 0.85, and 0.80, respectively), although sufficient diagnostic accuracy was not achieved for MACS. This model provides a noninvasive and efficient tool for adrenal tumor classification, potentially reducing the need for additional hormone stimulation tests; further validation studies are required to confirm its clinical utility.
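A minimal scikit-learn sketch of the classification step described above: a random forest over tumor diameter plus hormone and clinical parameters, scored with one-vs-rest AUC. All feature names, units, and data below are hypothetical stand-ins for the study's variables.

```python
# Random forest classification of adrenal tumor subtype from CT diameter
# plus clinical parameters, on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 162
X = np.column_stack([
    rng.normal(25, 10, n),   # tumor diameter on CT (mm, hypothetical)
    rng.normal(10, 4, n),    # early-morning cortisol (hypothetical units)
    rng.normal(120, 60, n),  # plasma aldosterone (hypothetical units)
    rng.normal(55, 12, n),   # age (years)
])
y = rng.integers(0, 5, n)    # 0=NFAT, 1=PA, 2=MACS, 3=CS, 4=Pheo

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
rf = RandomForestClassifier(n_estimators=300, random_state=1).fit(X_tr, y_tr)
proba = rf.predict_proba(X_te)
print(roc_auc_score(y_te, proba, multi_class="ovr", average="macro"))
```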

MvHo-IB: Multi-View Higher-Order Information Bottleneck for Brain Disorder Diagnosis

Kunyu Zhang, Qiang Li, Shujian Yu

arXiv preprint · Jul 3, 2025
Recent evidence suggests that modeling higher-order interactions (HOIs) in functional magnetic resonance imaging (fMRI) data can enhance the diagnostic accuracy of machine learning systems. However, effectively extracting and utilizing HOIs remains a significant challenge. In this work, we propose MvHo-IB, a novel multi-view learning framework that integrates both pairwise interactions and HOIs for diagnostic decision-making, while automatically compressing task-irrelevant redundant information. MvHo-IB introduces several key innovations: (1) a principled method that combines O-information from information theory with a matrix-based Rényi α-order entropy estimator to quantify and extract HOIs, (2) a purpose-built Brain3DCNN encoder to effectively utilize these interactions, and (3) a new multi-view learning information bottleneck objective to enhance representation learning. Experiments on three benchmark fMRI datasets demonstrate that MvHo-IB achieves state-of-the-art performance, significantly outperforming previous methods, including recent hypergraph-based techniques. The implementation of MvHo-IB is available at https://github.com/zky04/MvHo-IB.
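For context, the sketch below computes O-information, the higher-order quantity MvHo-IB builds on (Rosas et al.), under a simplifying Gaussian assumption so entropies reduce to log-determinants; the paper instead uses a matrix-based Rényi α-order entropy estimator, which this toy does not reproduce.

```python
# O-information under a Gaussian assumption: negative values indicate
# synergy-dominated higher-order structure, positive values redundancy.
import numpy as np

def gaussian_entropy(cov: np.ndarray) -> float:
    """Differential entropy of a multivariate Gaussian with covariance cov."""
    d = cov.shape[0]
    return 0.5 * (d * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

def o_information(X: np.ndarray) -> float:
    """O-info: (n-2)*H(X) + sum_i [H(X_i) - H(X_{-i})]; columns = variables."""
    n = X.shape[1]
    cov = np.cov(X, rowvar=False)
    acc = (n - 2) * gaussian_entropy(cov)
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        acc += gaussian_entropy(cov[[i]][:, [i]])          # H(X_i)
        acc -= gaussian_entropy(cov[np.ix_(rest, rest)])   # H(X_{-i})
    return acc

# Usage on toy "fMRI time series" for 5 regions, 500 time points, where one
# region carries a signal shared across the others.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
X[:, 4] = X[:, :4].sum(axis=1) + 0.1 * rng.normal(size=500)
print(f"O-information: {o_information(X):.3f}")
```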

Embedding-Based Federated Data Sharing via Differentially Private Conditional VAEs

Francesco Di Salvo, Hanh Huyen My Nguyen, Christian Ledig

arXiv preprint · Jul 3, 2025
Deep Learning (DL) has revolutionized medical imaging, yet its adoption is constrained by data scarcity and privacy regulations, limiting access to diverse datasets. Federated Learning (FL) enables decentralized training but suffers from high communication costs and is often restricted to a single downstream task, reducing flexibility. We propose a data-sharing method via Differentially Private (DP) generative models. By adopting foundation models, we extract compact, informative embeddings, reducing redundancy and lowering computational overhead. Clients collaboratively train a Differentially Private Conditional Variational Autoencoder (DP-CVAE) to model a global, privacy-aware data distribution, supporting diverse downstream tasks. Our approach, validated across multiple feature extractors, enhances privacy, scalability, and efficiency, outperforming traditional FL classifiers while ensuring differential privacy. Additionally, DP-CVAE produces higher-fidelity embeddings than DP-CGAN while requiring 5× fewer parameters.
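A minimal PyTorch sketch of the conditional VAE at the core of the approach, trained on foundation-model embeddings conditioned on class labels; the differential-privacy mechanism (e.g., DP-SGD via a library such as Opacus) is deliberately omitted, and all dimensions and names are assumptions rather than the authors' configuration.

```python
# Conditional VAE over embeddings: clients could share such a generator
# instead of raw data. DP noise is omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB, NCLS, LATENT = 384, 10, 32  # embedding dim, classes, latent dim (assumed)

class CVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(EMB + NCLS, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, LATENT), nn.Linear(256, LATENT)
        self.dec = nn.Sequential(
            nn.Linear(LATENT + NCLS, 256), nn.ReLU(), nn.Linear(256, EMB))

    def forward(self, e, y_onehot):
        h = self.enc(torch.cat([e, y_onehot], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparam. trick
        recon = self.dec(torch.cat([z, y_onehot], dim=-1))
        return recon, mu, logvar

def vae_loss(recon, e, mu, logvar):
    rec = F.mse_loss(recon, e, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# One toy training step on fake embeddings; in FL, each client would run such
# steps locally (with DP noise added) and share model updates, not data.
model = CVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
e = torch.randn(64, EMB)
y = F.one_hot(torch.randint(0, NCLS, (64,)), NCLS).float()
recon, mu, logvar = model(e, y)
loss = vae_loss(recon, e, mu, logvar)
loss.backward()
opt.step()
print(float(loss))
```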

3D Heart Reconstruction from Sparse Pose-agnostic 2D Echocardiographic Slices

Zhurong Chen, Jinhua Chen, Wei Zhuo, Wufeng Xue, Dong Ni

arXiv preprint · Jul 3, 2025
Echocardiography (echo) plays an indispensable role in the clinical management of heart disease. However, ultrasound imaging typically provides only two-dimensional (2D) cross-sectional images from a few specific views, making it challenging to interpret and inaccurate for estimating clinical parameters such as left ventricle (LV) volume. 3D ultrasound imaging offers an alternative for 3D quantification but is still limited by low spatial and temporal resolution and highly demanding manual delineation. To address these challenges, we propose an innovative framework for reconstructing personalized 3D heart anatomy from the 2D echo slices used routinely in clinical practice. Specifically, a novel 3D reconstruction pipeline alternately optimizes the 3D pose estimation of the 2D slices and the 3D integration of these slices using an implicit neural network, progressively transforming a prior 3D heart shape into a personalized 3D heart model. We validate the method on two datasets. When six planes are used, the reconstructed 3D heart yields a significant improvement in LV volume estimation over the bi-plane method (error: 1.98% vs. 20.24%). Moreover, the framework achieves an important breakthrough: estimating RV volume from 2D echo slices (with an error of 5.75%). This study provides a new way to perform personalized 3D structural and functional analysis from cardiac ultrasound and has great potential in clinical practice.
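To illustrate the alternating optimization the abstract outlines, here is a hedged PyTorch sketch in which an implicit occupancy network represents the heart while per-slice rigid poses are learned jointly, with updates alternating between shape and pose; the small-angle rotation, loss, and toy data are illustrative assumptions, not the paper's implementation.

```python
# Alternating optimization of an implicit 3D shape and per-slice 2D poses.
import torch
import torch.nn as nn

class ImplicitShape(nn.Module):
    """MLP mapping 3D points to occupancy logits (the personalized heart)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x)

shape = ImplicitShape()
# One learnable rigid pose per slice: 3 rotation parameters + translation.
n_slices = 6
poses = nn.Parameter(torch.zeros(n_slices, 6))

def slice_to_3d(uv, pose):
    """Lift 2D in-plane coords (N,2) to 3D via a small-angle rotation + shift."""
    rx, ry, rz, tx, ty, tz = pose
    pts = torch.cat([uv, torch.zeros(uv.shape[0], 1)], dim=-1)
    # small-angle linearized rotation, kept simple for the sketch
    rot = torch.stack([
        torch.stack([torch.ones(()), -rz, ry]),
        torch.stack([rz, torch.ones(()), -rx]),
        torch.stack([-ry, rx, torch.ones(())]),
    ])
    return pts @ rot.T + torch.stack([tx, ty, tz])

opt_shape = torch.optim.Adam(shape.parameters(), lr=1e-3)
opt_pose = torch.optim.Adam([poses], lr=1e-2)
bce = nn.BCEWithLogitsLoss()

uv = torch.rand(256, 2) - 0.5                             # toy slice pixels
labels = (uv.norm(dim=-1) < 0.3).float().unsqueeze(-1)    # toy chamber mask

for step in range(100):
    opt = opt_pose if step % 2 else opt_shape   # alternate pose vs shape
    opt.zero_grad()
    loss = bce(shape(slice_to_3d(uv, poses[0])), labels)
    loss.backward()
    opt.step()
print(float(loss))
```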