A comparative three-dimensional analysis of skeletal and dental changes induced by Herbst and PowerScope appliances in Class II malocclusion treatment: a retrospective cohort study.

Caleme E, Moro A, Mattos C, Miguel J, Batista K, Claret J, Leroux G, Cevidanes L

PubMed · Jul 3, 2025
Skeletal Class II malocclusion is commonly treated with mandibular advancement appliances during growth, and comparing the effectiveness of different appliances can help optimize treatment outcomes. This study compared the dental and skeletal outcomes of Class II malocclusion treatment using Herbst and PowerScope appliances in conjunction with fixed orthodontic therapy. This retrospective comparative study included 46 consecutively treated patients from two university clinics: 26 treated with PowerScope and 20 with the Herbst MiniScope. CBCT scans were obtained before and after treatment. Skeletal and dental changes were analyzed using maxillary and mandibular voxel-based regional superimpositions and cranial base registrations, aided by AI-based landmark detection. Measurement bias was minimized by using a calibrated, blinded examiner, and no patients were excluded from the analysis. Owing to the study's retrospective nature, no prospective registration was performed; the institutional review board granted ethical approval. The Herbst group showed greater anterior displacement at B-point and Pogonion than the PowerScope group (2.4 mm and 2.6 mm, respectively). Both groups exhibited improved maxillomandibular relationships, with a reduced SNA angle in the PowerScope group and an increased SNB angle in the Herbst group. Vertical skeletal changes were observed at points A, B, and Pog in both groups. Herbst also produced less lower incisor proclination and more pronounced distal movement of the upper incisors. Both appliances effectively corrected Class II malocclusion: Herbst promoted more pronounced skeletal advancement, while PowerScope induced greater dental compensation. These findings may generalize to similarly aged Class II patients in CVM stages 3-4.

Joint Shape Reconstruction and Registration via a Shared Hybrid Diffeomorphic Flow.

Shi H, Wang P, Zhang S, Zhao X, Yang B, Zhang C

PubMed · Jul 3, 2025
Deep implicit functions (DIFs) effectively represent shapes by using a neural network to map 3D spatial coordinates to scalar values that encode the shape's geometry, but establishing correspondences between shapes directly is difficult, which limits their use in medical image registration. Recently presented deformation-field-based methods achieve implicit template learning by combining template field learning with DIFs and deformation field learning, establishing shape correspondence through deformation fields. Although these approaches enable joint learning of shape representation and shape correspondence, the decoupled optimization of the template field and the deformation field, caused by the absence of deformation annotations, leads to a relatively accurate template field but an under-optimized deformation field. In this paper, we propose a novel implicit template learning framework via a shared hybrid diffeomorphic flow (SHDF), which enables shared optimization for deformation and template, contributing to better deformations and shape representation. Specifically, we formulate the signed distance function (SDF, a type of DIF) as a one-dimensional (1D) integral, unifying dimensions to match the form used in solving the ordinary differential equation (ODE) for deformation field learning. The SDF in 1D integral form is then integrated seamlessly into deformation field learning. Using a recurrent learning strategy, we frame shape representations and deformations as solutions to different initial value problems of the same ODE. We also introduce a global smoothness regularization to handle local optima due to limited outside-of-shape data. Experiments on medical datasets show that SHDF outperforms state-of-the-art methods in shape representation and registration.
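The shared idea, loosely, is that both the deformation and the shape representation are obtained by integrating along an ODE. The sketch below (a minimal PyTorch illustration with an invented toy velocity network; it is not the authors' SHDF code) shows the deformation half: points are transported by fixed-step Euler integration of a learned velocity field, the standard construction behind diffeomorphic flows.

```python
# Minimal sketch: a deformation as the solution of dx/dt = v(x, t).
# The velocity network and sizes are assumptions for illustration only.
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Maps (x, y, z, t) -> a 3D velocity vector; a hypothetical stand-in."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x, t):
        # x: (N, 3) query points, t: scalar time in [0, 1]
        t_col = torch.full_like(x[:, :1], t)
        return self.net(torch.cat([x, t_col], dim=1))

def integrate_flow(v, x0, steps=16):
    """Euler integration from t=0 to t=1.

    Transports points x0 along the flow; reversing the velocity sign gives
    the approximate inverse map, which is what makes the flow diffeomorphic
    in the continuous limit.
    """
    x, dt = x0, 1.0 / steps
    for k in range(steps):
        x = x + dt * v(x, k * dt)
    return x

v = VelocityField()
pts = torch.rand(1024, 3)          # points in a normalized shape space
warped = integrate_flow(v, pts)    # points deformed toward the template
```

In the paper's formulation, the SDF itself is also written as a 1D integral so that shape representation can be solved as an initial value problem of the same ODE.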

Fat-water MRI separation using deep complex convolution network.

Ganeshkumar M, Kandasamy D, Sharma R, Mehndiratta A

PubMed · Jul 3, 2025
Deep complex convolutional networks (DCCNs) utilize complex-valued convolutions and can process complex-valued MRI signals directly without splitting them into two real-valued magnitude and phase components. The performance of DCCN and real-valued U-Net is thoroughly investigated in the physics-informed subject-specific ad-hoc reconstruction method for fat-water separation and is compared against a widely used reference approach. A comprehensive test dataset (n = 33) was used for performance analysis. The 2012 ISMRM fat-water separation workshop dataset containing 28 batches of multi-echo MRIs with 3-15 echoes from the abdomen, thigh, knee, and phantoms, acquired with 1.5 T and 3 T scanners were used. Additionally, five MAFLD patients multi-echo MRIs acquired from our clinical radiology department were also used. The quantitative results demonstrated that DCCN produced fat-water maps with better normalized RMS error and structural similarity index with the reference approach, compared to real-valued U-Nets in the ad-hoc reconstruction method for fat-water separation. The DCCN achieved an overall average SSIM of 0.847 ± 0.069 and 0.861 ± 0.078 in generating fat and water maps, respectively, in contrast the U-Net achieved only 0.653 ± 0.166 and 0.729 ± 0.134. The average liver PDFF from DCCN achieved a correlation coefficient R of 0.847 with the reference approach.
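For readers unfamiliar with complex-valued convolutions, the building block can be written as two real convolutions combined by the complex product rule. The following is a minimal PyTorch sketch of that block; the layer name, channel counts, and usage are illustrative assumptions, not the paper's architecture:

```python
# A minimal complex-valued 2D convolution (a common DCCN building block),
# implemented with two real convolutions; a sketch, not the paper's network.
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """(W_r + i W_i) * (a + i b) = (W_r*a - W_i*b) + i (W_r*b + W_i*a)."""
    def __init__(self, in_ch, out_ch, k=3, padding=1):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, k, padding=padding)
        self.conv_i = nn.Conv2d(in_ch, out_ch, k, padding=padding)

    def forward(self, real, imag):
        out_real = self.conv_r(real) - self.conv_i(imag)
        out_imag = self.conv_r(imag) + self.conv_i(real)
        return out_real, out_imag

# Usage on a complex multi-echo signal split into real/imaginary parts:
echoes = torch.randn(1, 6, 64, 64, dtype=torch.complex64)  # 6 echoes (toy)
layer = ComplexConv2d(in_ch=6, out_ch=16)
re, im = layer(echoes.real, echoes.imag)
```

The key property is that the real and imaginary channels are mixed by the algebra of complex multiplication rather than treated as independent inputs, which is what lets the network exploit phase information directly.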

Clinical obstacles to machine-learning POCUS adoption and system-wide AI implementation (The COMPASS-AI survey).

Wong A, Roslan NL, McDonald R, Noor J, Hutchings S, D'Costa P, Via G, Corradi F

PubMed · Jul 3, 2025
Point-of-care ultrasound (POCUS) has become indispensable across medical specialties, and the integration of artificial intelligence (AI) and machine learning (ML) holds significant promise to further enhance POCUS capabilities. However, a comprehensive understanding of healthcare professionals' perspectives on this integration is lacking. This study aimed to investigate the global perceptions, familiarity, and adoption of AI in POCUS among healthcare professionals. An international, web-based survey was conducted among healthcare professionals involved in POCUS. The survey instrument included sections on demographics, familiarity with AI, perceived utility, barriers (technological, training, trust, workflow, legal/ethical), and overall perceptions of AI-assisted POCUS. The data were analysed using descriptive statistics, frequency distributions, and group comparisons (chi-square/Fisher's exact test and t-test/Mann-Whitney U test). The study surveyed 1154 healthcare professionals on perceived barriers to implementing AI in point-of-care ultrasound. Despite general enthusiasm, with 81.1% of respondents expressing agreement or strong agreement, significant barriers were identified. The most frequently cited single greatest barriers were Training & Education (27.1%) and Clinical Validation & Evidence (17.5%). Analysis also revealed that perceptions of specific barriers vary significantly with demographic factors, including region of practice, medical specialty, and years of healthcare experience. This novel global survey provides critical insights into the perceptions and adoption of AI in POCUS. The findings highlight considerable enthusiasm alongside crucial challenges, primarily concerning training, validation, guidelines, and support. Addressing these barriers is essential for the responsible and effective implementation of AI in POCUS.
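As a concrete illustration of the reported analysis style, the sketch below runs a chi-square test on a categorical barrier-by-group contingency table and a Mann-Whitney U test on ordinal agreement scores with SciPy. All numbers are invented placeholders, not survey data:

```python
# Illustrative group-comparison sketch (chi-square for categorical barrier
# endorsement across groups; Mann-Whitney U for ordinal Likert responses).
from scipy import stats

# Barrier endorsed (yes/no) by region: a hypothetical 2x2 contingency table.
table = [[120, 80],   # region A: endorsed / not endorsed
         [ 90, 150]]  # region B
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

# Likert-style agreement scores compared between two specialties (made up).
group_a = [5, 4, 4, 3, 5, 4, 2, 5]
group_b = [3, 2, 4, 3, 2, 3, 4, 2]
u_stat, p_u = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"chi-square p={p_chi:.3f}, Mann-Whitney p={p_u:.3f}")
```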

Development of a deep learning-based automated diagnostic system (DLADS) for classifying mammographic lesions - a first large-scale multi-institutional clinical trial in Japan.

Yamaguchi T, Koyama Y, Inoue K, Ban K, Hirokaga K, Kujiraoka Y, Okanami Y, Shinohara N, Tsunoda H, Uematsu T, Mukai H

PubMed · Jul 3, 2025
Western countries have recently built evidence on mammographic artificial intelligence computer-aided diagnosis (AI-CADx) systems; however, their effectiveness has not yet been sufficiently validated in Japanese women. In this study, we aimed to establish the first Japanese mammographic AI-CADx system. We retrospectively collected screening and diagnostic mammograms from 63 institutions in Japan and randomly divided the images into training, validation, and test datasets in a balanced 8:1:1 ratio on a case-level basis. The gold-standard annotation for the AI-CADx system was mammographic findings based on pathologic references. The AI-CADx system was developed using SE-ResNet modules and a sliding-window algorithm, with the cut-off concentration gradient of the heatmap image set at 15%. The AI-CADx system was considered accurate if it detected the presence of a malignant lesion in a breast cancer mammogram. The primary endpoint was defined as a sensitivity and specificity above 80% for breast cancer diagnosis in the test dataset. We collected 20,638 mammograms from 11,450 Japanese women with a median age of 55 years. The mammograms comprised 5019 breast cancer (24.3%), 5026 benign (24.4%), and 10,593 normal (51.3%) cases. In the test dataset of 2059 mammograms, the AI-CADx system achieved a sensitivity of 83.5% and a specificity of 84.7% for breast cancer diagnosis, with an AUC of 0.841 (DeLong 95% CI: 0.822-0.859). Accuracy was largely consistent regardless of breast density, mammographic findings, type of cancer, and mammography vendor (AUC range: 0.639-0.906). The developed Japanese mammographic AI-CADx system diagnosed breast cancer with the pre-specified sensitivity and specificity. We are planning a prospective study to validate its diagnostic performance when used by Japanese physicians as a second reader. UMIN trial number UMIN000039009; registered 26 December 2019, https://www.umin.ac.jp/ctr/.
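A plausible reading of the detection pipeline is a patch classifier slid over the mammogram, its scores accumulated into a heatmap that is then thresholded. The sketch below implements that pattern in NumPy, with the 15% cut-off interpreted (as an assumption on our part) as a fraction of the heatmap maximum and `score_patch` standing in for the trained SE-ResNet classifier:

```python
# Hedged sketch of sliding-window lesion detection with heatmap thresholding.
import numpy as np

def detect(mammogram, score_patch, win=224, stride=112, cutoff=0.15):
    H, W = mammogram.shape
    heat = np.zeros((H, W))
    count = np.zeros((H, W))
    for y in range(0, H - win + 1, stride):
        for x in range(0, W - win + 1, stride):
            s = score_patch(mammogram[y:y+win, x:x+win])  # malignancy score
            heat[y:y+win, x:x+win] += s
            count[y:y+win, x:x+win] += 1
    heat /= np.maximum(count, 1)            # average overlapping windows
    mask = heat >= cutoff * heat.max()      # keep only the hottest regions
    return heat, mask

# Dummy usage with a random image and a placeholder scorer:
img = np.random.rand(896, 896)
heat, mask = detect(img, score_patch=lambda p: float(p.mean()))
```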

Radiological and Biological Dictionary of Radiomics Features: Addressing Understandable AI Issues in Personalized Prostate Cancer, Dictionary Version PM1.0.

Salmanpour MR, Amiri S, Gharibi S, Shariftabrizi A, Xu Y, Weeks WB, Rahmim A, Hacihaliloglu I

PubMed · Jul 3, 2025
Artificial intelligence (AI) can advance medical diagnostics, but limited interpretability hinders its clinical use. This work links standardized quantitative radiomics features (RFs) extracted from medical images with clinical frameworks like PI-RADS, ensuring AI models are understandable and aligned with clinical practice. We investigate the connection between visual semantic features defined in PI-RADS and associated risk factors, moving beyond abnormal imaging findings and establishing a shared framework between medical and AI professionals by creating a standardized radiological/biological RF dictionary. Six interpretable and seven complex classifiers, combined with nine interpretable feature selection algorithms (FSAs), were applied to RFs extracted from segmented lesions in T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and apparent diffusion coefficient (ADC) multiparametric MRI sequences to predict TCIA-UCLA scores, grouped as low-risk (scores 1-3) and high-risk (scores 4-5). We then used the created dictionary to interpret the best predictive models. Combining sequences with FSAs (including the ANOVA F-test, correlation coefficient, and Fisher score) and logistic regression identified the key features: the 90th percentile from T2WI (reflecting hypo-intensity related to prostate cancer risk); variance from T2WI (lesion heterogeneity); shape metrics, including least axis length and surface-area-to-volume ratio from ADC (lesion shape and compactness); and run entropy from ADC (texture consistency). This approach achieved the highest average accuracy of 0.78 ± 0.01, significantly outperforming single-sequence methods (p < 0.05). The developed dictionary for prostate MRI (PM1.0) serves as a common language and fosters collaboration between clinical professionals and AI developers, advancing trustworthy AI solutions that support reliable and interpretable clinical decisions.
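The best-performing combination the abstract describes (an ANOVA F-test FSA feeding logistic regression) maps directly onto a standard scikit-learn pipeline. The sketch below shows that pattern on a synthetic feature matrix; the shapes and data are placeholders, not the TCIA-UCLA cohort:

```python
# Sketch: ANOVA F-test feature selection over pooled multi-sequence
# radiomics features, then logistic regression for low- vs high-risk labels.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 300))     # 300 RFs pooled from T2WI/DWI/ADC (toy)
y = rng.integers(0, 2, size=120)    # 0 = low-risk (1-3), 1 = high-risk (4-5)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),   # keep the 10 most discriminative RFs
    LogisticRegression(max_iter=1000),
)
acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
print(f"CV accuracy: {acc:.2f}")
```

Because both the selector (an F-statistic per feature) and the classifier (a coefficient per retained feature) are transparent, each kept feature can be looked up in the proposed dictionary and read back in radiological/biological terms.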

Development of a prediction model by combining tumor diameter and clinical parameters of adrenal incidentaloma.

Iwamoto Y, Kimura T, Morimoto Y, Sugisaki T, Dan K, Iwamoto H, Sanada J, Fushimi Y, Shimoda M, Fujii T, Nakanishi S, Mune T, Kaku K, Kaneto H

PubMed · Jul 3, 2025
When adrenal incidentalomas are detected, diagnostic procedures are complicated by the need for endocrine-stimulating tests and imaging using various modalities to evaluate whether the tumor is a hormone-producing adrenal tumor. This study aimed to develop a machine-learning-based clinical model that combines computed tomography (CT) imaging and clinical parameters for adrenal tumor classification. This was a retrospective cohort study involving 162 patients who underwent hormone testing for adrenal incidentalomas at our institution. Nominal logistic regression analysis was used to identify the predictive factors for hormone-producing adrenal tumors, and three random forest classification models were developed using clinical and imaging parameters. The study included 55 patients with non-functioning adrenal tumors (NFAT), 44 with primary aldosteronism (PA), 22 with mild autonomous cortisol secretion (MACS), 18 with Cushing's syndrome (CS), and 23 with pheochromocytoma (Pheo). A random forest classification model combining the adrenal tumor diameter on CT, early morning hormone measurements, and several clinical parameters was constructed, and showed high diagnostic accuracy for PA, Pheo, and CS (area under the curve: 0.88, 0.85, and 0.80, respectively). However, sufficient diagnostic accuracy has not yet been achieved for MACS. This model provides a noninvasive and efficient tool for adrenal tumor classification, potentially reducing the need for additional hormonal stimulation tests. However, further validation studies are required to confirm the clinical utility of this method.
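The modeling setup, as described, fits a familiar scikit-learn pattern: a random forest over tumor diameter plus hormone measurements, evaluated with per-class one-vs-rest AUC. Below is a minimal sketch on synthetic data; the feature columns and class balance are invented, and this is not the study's code:

```python
# Sketch: random forest over CT tumor diameter plus early-morning hormone
# values, scored per tumor class with one-vs-rest AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# hypothetical columns: [diameter_mm, cortisol, aldosterone, renin, metaneph.]
X = rng.normal(size=(162, 5))
y = rng.integers(0, 5, size=162)            # NFAT / PA / MACS / CS / Pheo

clf = RandomForestClassifier(n_estimators=300, random_state=0)
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")

# One-vs-rest AUC per class, mirroring the per-diagnosis results reported.
for k, name in enumerate(["NFAT", "PA", "MACS", "CS", "Pheo"]):
    auc = roc_auc_score((y == k).astype(int), proba[:, k])
    print(f"{name}: AUC = {auc:.2f}")
```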

Deep neural hashing for content-based medical image retrieval: A survey.

Manna A, Sista R, Sheet D

PubMed · Jul 3, 2025
The ever-growing digital repositories of medical data provide opportunities for advanced healthcare by forming the foundation of a digital healthcare ecosystem. Such an ecosystem facilitates digitized solutions for early diagnosis, evidence-based treatment, precision medicine, and more. Content-based medical image retrieval (CBMIR) plays a pivotal role in delivering advanced diagnostic healthcare within such an ecosystem, and deep neural hashing (DNH) has been introduced into CBMIR systems to enable faster and more relevant retrieval from large repositories. The fusion of DNH with CBMIR is a blooming area whose potential, impact, and methods have not yet been summarized. This survey attempts to fill that gap through an in-depth exploration of DNH methods for CBMIR. It portrays an end-to-end pipeline for DNH within a CBMIR system, discussing in detail the design of the DNH network, diverse learning strategies, different loss functions, and evaluation metrics for retrieval performance. The learning strategies for DNH are further categorized by loss function into pointwise, pairwise, and triplet-wise approaches. Centered on this categorization, existing methods are discussed in depth, with a focus on the key contributions of each. Finally, a future vision for the field is shared, emphasizing three aspects: current and immediate areas of research, translating current and near-future research into practical applications, and unexplored topics for future work. In summary, this survey depicts the current state of research and the future vision of CBMIR systems with DNH.
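To make the triplet-wise category concrete, the sketch below shows a generic triplet hashing objective: a tanh-relaxed code head, a triplet margin loss on the codes, and a quantization penalty that pushes codes toward binary values. It is a representative PyTorch illustration, not any specific surveyed method:

```python
# Generic triplet-wise deep hashing sketch: anchor/positive codes are pulled
# together, anchor/negative codes pushed apart, and codes driven toward ±1.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HashHead(nn.Module):
    def __init__(self, feat_dim=512, bits=48):
        super().__init__()
        self.fc = nn.Linear(feat_dim, bits)

    def forward(self, feats):
        return torch.tanh(self.fc(feats))   # relaxed codes in (-1, 1)

def triplet_hash_loss(anchor, positive, negative, margin=2.0, lam=0.1):
    trip = F.triplet_margin_loss(anchor, positive, negative, margin=margin)
    # Quantization term: push |code| -> 1 so sign() loses little information.
    quant = ((anchor.abs() - 1) ** 2).mean()
    return trip + lam * quant

head = HashHead()
a, p, n = (torch.randn(8, 512) for _ in range(3))   # backbone features (toy)
loss = triplet_hash_loss(head(a), head(p), head(n))
binary_codes = head(a).sign()   # binarized codes used at retrieval time
```

Pointwise and pairwise methods follow the same template, differing only in whether the loss is computed on single labeled codes or on similar/dissimilar pairs.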

Radiology report generation using automatic keyword adaptation, frequency-based multi-label classification and text-to-text large language models.

He Z, Wong ANN, Yoo JS

PubMed · Jul 3, 2025
Radiology reports are essential in medical imaging, providing critical insights for diagnosis, treatment, and patient management by bridging the gap between radiologists and referring physicians. However, manual report generation is time-consuming and labor-intensive, leading to inefficiencies and delays in clinical workflows, particularly as case volumes increase. Although deep learning approaches have shown promise in automating radiology report generation, existing methods, particularly those based on the encoder-decoder framework, suffer from significant limitations: a lack of explainability, owing to the black-box features produced by the encoder, and limited adaptability to diverse clinical settings. In this study, we address these challenges with a novel deep learning framework for radiology report generation that enhances explainability, accuracy, and adaptability. Our approach replaces traditional black-box visual features with transparent keyword lists, improving the interpretability of the feature extraction process. To generate these keyword lists, we apply multi-label classification, enhanced by an automatic keyword adaptation mechanism that dynamically configures the classifier for specific clinical environments, reducing reliance on manually curated reference keyword lists and improving adaptability across diverse datasets. We also introduce a frequency-based multi-label classification strategy to address keyword imbalance, ensuring that rare but clinically significant terms are accurately identified. Finally, we leverage a pre-trained text-to-text large language model (LLM) to generate human-like, clinically relevant radiology reports from the extracted keyword lists, ensuring linguistic quality and clinical coherence. We evaluate our method on two public datasets, IU-XRay and MIMIC-CXR, demonstrating superior performance over state-of-the-art methods. Our framework improves not only the accuracy and reliability of radiology report generation but also the explainability of the process, fostering greater trust in and adoption of AI-driven solutions in clinical practice. Comprehensive ablation studies confirm the robustness and effectiveness of each component. In conclusion, we developed a novel deep-learning-based method for preparing high-quality, explainable radiology reports for chest X-ray images using multi-label classification and a text-to-text large language model. Our method addresses the lack of explainability in current workflows and provides a clear, flexible automated pipeline that can reduce radiologists' workload and support further applications in human-AI interactive communication.
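The frequency-based strategy can be illustrated with inverse-frequency positive weights in a binary cross-entropy loss, so that rare keywords contribute more to the gradient. The sketch below is a hedged PyTorch illustration with an invented four-keyword vocabulary; the paper's actual weighting scheme may differ:

```python
# Sketch: frequency-based multi-label classification, where rare but
# clinically significant keywords get larger loss weights.
import torch
import torch.nn as nn

keyword_freq = torch.tensor([0.40, 0.25, 0.05, 0.01])  # per-label rate (toy)
pos_weight = (1.0 - keyword_freq) / keyword_freq        # upweight rare labels

criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
logits = torch.randn(16, 4)                  # classifier output, 4 keywords
targets = torch.randint(0, 2, (16, 4)).float()
loss = criterion(logits, targets)

# Downstream, predicted keywords would be joined into a prompt such as
# "keywords: cardiomegaly, pleural effusion" and handed to a pre-trained
# text-to-text LLM (e.g., a T5-style model) to draft the report.
```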

Transformer attention-based neural network for cognitive score estimation from sMRI data.

Li S, Zhang Y, Zou C, Zhang L, Li F, Liu Q

PubMed · Jul 3, 2025
Accurately predicting cognitive scores from structural MRI (sMRI) holds significant clinical value for understanding the pathological stages of dementia and forecasting Alzheimer's disease (AD). Existing deep learning methods often depend on anatomical priors and overlook individual-specific structural differences during AD progression. To address these limitations, this work proposes a deep neural network that incorporates Transformer attention to jointly predict multiple cognitive scores, including ADAS, CDRSB, and MMSE. The architecture first employs a 3D convolutional neural network backbone to encode the sMRI, capturing preliminary local structural information. An improved Transformer attention block, integrated with 3D positional encoding and a 3D convolutional layer, then adaptively captures discriminative imaging features across the brain, focusing effectively on key cognition-related regions. Finally, an attention-aware regression network jointly predicts the multiple clinical scores. Experimental results demonstrate that our method outperforms existing traditional and deep learning methods on the ADNI dataset. Further qualitative analysis reveals that the dementia-related brain regions identified by the model hold important biological significance, effectively supporting cognitive score prediction. Our code is publicly available at: https://github.com/lshsx/CTA_MRI.
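The described architecture family (a 3D CNN encoder, Transformer attention over the resulting tokens, and a joint regression head) can be sketched compactly in PyTorch. All sizes below are placeholders and the 3D positional encoding is omitted for brevity; the authors' released code at the link above is the authoritative version:

```python
# Minimal sketch: 3D CNN -> Transformer attention over patch tokens ->
# joint regression of three cognitive scores (ADAS, CDRSB, MMSE).
import torch
import torch.nn as nn

class CogScoreNet(nn.Module):
    def __init__(self, d=128, n_scores=3):
        super().__init__()
        self.backbone = nn.Sequential(          # crude 3D CNN encoder
            nn.Conv3d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, d, 3, stride=2, padding=1), nn.ReLU(),
        )
        enc_layer = nn.TransformerEncoderLayer(d_model=d, nhead=4,
                                               batch_first=True)
        self.attn = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(d, n_scores)      # joint ADAS/CDRSB/MMSE output

    def forward(self, vol):                     # vol: (B, 1, D, H, W)
        feats = self.backbone(vol)              # (B, d, D', H', W')
        tokens = feats.flatten(2).transpose(1, 2)  # (B, N_tokens, d)
        attended = self.attn(tokens)            # self-attention across brain
        return self.head(attended.mean(dim=1))  # (B, 3) predicted scores

scores = CogScoreNet()(torch.randn(2, 1, 32, 48, 32))  # toy volume sizes
```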