
Automatic segmentation of cone beam CT images using treatment planning CT images in patients with prostate cancer.

Takayama Y, Kadoya N, Yamamoto T, Miyasaka Y, Kusano Y, Kajikawa T, Tomori S, Katsuta Y, Tanaka S, Arai K, Takeda K, Jingu K

PubMed · Aug 14, 2025
Cone-beam computed tomography-based online adaptive radiotherapy (CBCT-based online ART) is currently used in clinical practice; however, deep learning-based segmentation of CBCT images remains challenging. Previous studies generated CBCT datasets for segmentation by adding contours outside clinical practice or synthesizing tissue contrast-enhanced diagnostic images paired with CBCT images. This study aimed to improve CBCT segmentation by matching the treatment planning CT (tpCT) image quality to CBCT images without altering the tpCT image or its contours. A deep learning-based CBCT segmentation model was trained for the male pelvis using only the tpCT dataset. To bridge the quality gap between tpCT and routine CBCT images, an artificial pseudo-CBCT dataset was generated using Gaussian noise and Fourier domain adaptation (FDA) for 80 tpCT datasets (the hybrid FDA method). A five-fold cross-validation approach was used for model training. For comparison, atlas-based segmentation was performed with a registered tpCT dataset. The Dice similarity coefficient (DSC) assessed contour quality between the model-predicted and reference manual contours. The average DSC values for the clinical target volume, bladder, and rectum using the hybrid FDA method were 0.71 ± 0.08, 0.84 ± 0.08, and 0.78 ± 0.06, respectively. In contrast, the values for the model using plain tpCT were 0.40 ± 0.12, 0.17 ± 0.21, and 0.18 ± 0.14, and for the atlas-based model were 0.66 ± 0.13, 0.59 ± 0.16, and 0.66 ± 0.11, respectively. The segmentation model using the hybrid FDA method demonstrated significantly higher accuracy than models trained on plain tpCT datasets and those using atlas-based segmentation.
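The abstract does not include code, but the general idea behind the hybrid FDA augmentation (swap the low-frequency amplitude spectrum of a planning-CT slice with that of a CBCT slice, keep the tpCT phase, then add Gaussian noise) can be sketched in a few lines of NumPy. This is an illustrative sketch only; the window fraction `beta`, the noise level, and the array names are assumptions, not the authors' settings.

```python
import numpy as np

def fda_pseudo_cbct(tpct_slice, cbct_slice, beta=0.05, noise_sigma=20.0):
    """Minimal sketch of Fourier domain adaptation plus Gaussian noise.

    Replaces the central (low-frequency) amplitude block of a tpCT slice
    with that of a CBCT slice while keeping the tpCT phase, then adds
    Gaussian noise to mimic CBCT image quality. Parameters are illustrative.
    """
    fft_tp = np.fft.fftshift(np.fft.fft2(tpct_slice.astype(np.float32)))
    fft_cb = np.fft.fftshift(np.fft.fft2(cbct_slice.astype(np.float32)))

    amp_tp, phase_tp = np.abs(fft_tp), np.angle(fft_tp)
    amp_cb = np.abs(fft_cb)

    h, w = tpct_slice.shape
    bh, bw = int(h * beta), int(w * beta)
    cy, cx = h // 2, w // 2

    # Swap the low-frequency amplitude window.
    amp_tp[cy - bh:cy + bh, cx - bw:cx + bw] = amp_cb[cy - bh:cy + bh, cx - bw:cx + bw]

    mixed = np.fft.ifft2(np.fft.ifftshift(amp_tp * np.exp(1j * phase_tp)))
    return np.real(mixed) + np.random.normal(0.0, noise_sigma, size=(h, w))
```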

Data-Driven Abdominal Phenotypes of Type 2 Diabetes in Lean, Overweight, and Obese Cohorts

Lucas W. Remedios, Chloe Choe, Trent M. Schwartz, Dingjie Su, Gaurav Rudravaram, Chenyu Gao, Aravind R. Krishnan, Adam M. Saunders, Michael E. Kim, Shunxing Bao, Alvin C. Powers, Bennett A. Landman, John Virostko

arXiv preprint · Aug 14, 2025
Purpose: Although elevated BMI is a well-known risk factor for type 2 diabetes, the disease's presence in some lean adults and absence in others with obesity suggests that detailed body composition may uncover abdominal phenotypes of type 2 diabetes. With AI, we can now extract detailed measurements of size, shape, and fat content from abdominal structures in 3D clinical imaging at scale. This creates an opportunity to empirically define body composition signatures linked to type 2 diabetes risk and protection using large-scale clinical data. Approach: To uncover BMI-specific diabetic abdominal patterns from clinical CT, we applied our design four times: once on the full cohort (n = 1,728) and separately on the lean (n = 497), overweight (n = 611), and obese (n = 620) subgroups. Briefly, our experimental design transforms abdominal scans into collections of explainable measurements through segmentation, classifies type 2 diabetes with a cross-validated random forest, measures how features contribute to model-estimated risk or protection through SHAP analysis, groups scans by shared model decision patterns (clustering on SHAP values), and links these groups back to anatomical differences (classification). Results: The random forests achieved mean AUCs of 0.72-0.74. There were shared type 2 diabetes signatures in each group: fatty skeletal muscle, older age, greater visceral and subcutaneous fat, and a smaller or fat-laden pancreas. Univariate logistic regression confirmed the direction of 14-18 of the top 20 predictors within each subgroup (p < 0.05). Conclusions: Our findings suggest that abdominal drivers of type 2 diabetes may be consistent across weight classes.
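The classification-plus-attribution stage described here (cross-validated random forest with SHAP values collected on held-out subjects, which can then be clustered) can be sketched with scikit-learn and the shap package. This is a hedged illustration of the general pattern, not the authors' code; the feature matrix X (abdominal measurements) and labels y are hypothetical inputs.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

def rf_shap_pipeline(X, y, n_splits=5, seed=0):
    """Cross-validated random forest with out-of-fold SHAP attributions.

    X: (n_subjects, n_features) array of body-composition measurements.
    y: binary type 2 diabetes labels. Both are illustrative placeholders.
    """
    aucs = []
    shap_values = np.zeros_like(X, dtype=float)
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in cv.split(X, y):
        clf = RandomForestClassifier(n_estimators=500, random_state=seed)
        clf.fit(X[train_idx], y[train_idx])
        prob = clf.predict_proba(X[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], prob))

        # Positive-class SHAP values for the held-out subjects; older shap
        # versions return a list per class, newer ones a 3D array.
        sv = shap.TreeExplainer(clf).shap_values(X[test_idx])
        shap_values[test_idx] = sv[1] if isinstance(sv, list) else sv[..., 1]
    # Clustering these SHAP rows (e.g., with KMeans) would group scans by
    # shared model decision patterns, as described in the abstract.
    return np.mean(aucs), shap_values
```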

Deep Learning-Based Automated Segmentation of Uterine Myomas

Tausifa Jan Saleem, Mohammad Yaqub

arXiv preprint · Aug 14, 2025
Uterine fibroids (myomas) are the most common benign tumors of the female reproductive system, particularly among women of childbearing age. With a prevalence exceeding 70%, they pose a significant burden on female reproductive health. Clinical symptoms such as abnormal uterine bleeding, infertility, pelvic pain, and pressure-related discomfort play a crucial role in guiding treatment decisions, which are largely influenced by the size, number, and anatomical location of the fibroids. Magnetic Resonance Imaging (MRI) is a non-invasive and highly accurate imaging modality commonly used by clinicians for the diagnosis of uterine fibroids. Segmenting uterine fibroids requires a precise assessment of both the uterus and fibroids on MRI scans, including measurements of volume, shape, and spatial location. However, this process is labor-intensive, time-consuming, and subject to variability due to intra- and inter-expert differences at both pre- and post-treatment stages. As a result, there is a critical need for an accurate and automated segmentation method for uterine fibroids. In recent years, deep learning algorithms have shown remarkable improvements in medical image segmentation, outperforming traditional methods. These approaches offer the potential for fully automated segmentation. Several studies have explored the use of deep learning models to achieve automated segmentation of uterine fibroids. However, most previous work has been conducted on private datasets, which poses challenges for validation and comparison between studies. In this study, we leverage the publicly available Uterine Myoma MRI Dataset (UMD) to establish a baseline for automated segmentation of uterine fibroids, enabling standardized evaluation and facilitating future research in this domain.
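The abstract does not state its evaluation metrics, but segmentation baselines of this kind are conventionally scored with the Dice similarity coefficient (the same metric used in the CBCT study above). A minimal sketch follows; the mask names are chosen for illustration.

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask, eps=1e-7):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)
```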

Healthcare and cutting-edge technology: Advancements, challenges, and future prospects.

Singhal V, R S, Singhal S, Tiwari A, Mangal D

PubMed · Aug 14, 2025
The high-level integration of technology in health care has radically changed the process of patient care, diagnosis, treatment, and health outcomes. This paper discusses significant technological advances: AI for medical imaging to detect early disease stages; robotic surgery with precision and minimally invasive techniques; telemedicine for remote monitoring and virtual consultation; personalized medicine through genomic analysis; and blockchain for secure and transparent handling of health data. Each section of the paper discusses the underlying principles, advantages, and disadvantages of these technologies, supported by case studies such as AI in radiology to improve cancer diagnosis, robotic surgery to increase surgical accuracy, and blockchain-based electronic health records to ensure data integrity and security. The paper also discusses key ethical issues, including risks to data privacy, algorithmic bias in AI-based diagnosis, patient consent problems in genomic medicine, and regulatory issues blocking the large-scale adoption of digital health solutions. It also recommends avenues of future research in areas where interdisciplinary cooperation, effective cybersecurity frameworks, and policy transformations are urgently required to ensure that new healthcare technologies are adopted ethically and responsibly. The work aims to deliver important information for policymakers, researchers, and healthcare practitioners interested in the changing role of technology in improving healthcare provision and patient outcomes.

Data-driven cognitive subtypes in major depressive disorder: Gray matter atrophy in the left fusiform gyrus and cerebellum.

Tao Y, Yan Y, Wang M, Fan H, Dou Y, Zhao L, Ni R, Wei J, Yang X, Ma X

PubMed · Aug 14, 2025
This study applied a semi-supervised machine learning approach to classify major depressive disorder (MDD) patients into more homogeneous cognitive subtypes based on multidimensional cognitive profiles, and performed multimodal neuroimaging to identify subtype-specific neural signatures. A total of 147 MDD patients and 222 healthy controls (HCs) completed the Cambridge Neuropsychological Test Automated Battery (CANTAB) and magnetic resonance imaging (MRI) scans. Cognitive subtypes were derived from neurocognitive profiles using heterogeneity through discriminative analysis (HYDRA). General linear models (GLMs) were employed to assess group differences in neurocognitive indices and neuroimaging data, followed by Tukey's post-hoc test for pairwise comparisons between the groups. Based on cognitive profiles, MDD patients were classified into cognitive deficit (CD, N = 75) and cognitive preservation (CP, N = 72) subtypes. Voxel-based morphometry (VBM) revealed reduced grey matter volume (GMV) in the left fusiform gyrus and left cerebellum in MDD patients compared with HCs, with CD patients showing greater atrophy than patients in the CP subtype. Meanwhile, the amplitude of low-frequency fluctuations (ALFF) in the temporal lobe was decreased in both MDD subtypes compared with HCs, with no inter-subtype differences. A subtype of MDD characterized by comprehensive cognitive deficits is associated with structural atrophy in the left fusiform gyrus and cerebellum, suggesting these regions as potential biomarkers for the cognitive deficit subtype of MDD. However, no significant differences in ALFF were observed between the two cognitive subgroups.
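The group-comparison step (a GLM across the CD, CP, and HC groups followed by Tukey's post-hoc test) can be sketched with statsmodels. This is an illustrative sketch, not the authors' analysis: the DataFrame and column names are hypothetical, and a real VBM/ALFF analysis would add covariates such as age, sex, and total intracranial volume.

```python
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def group_comparison(df, measure="gmv_left_fusiform", group_col="group"):
    """One-way GLM (ANOVA) across CD / CP / HC with Tukey post-hoc tests.

    df: hypothetical DataFrame with one row per participant, a numeric
    column `measure`, and a categorical column `group_col`.
    """
    model = ols(f"{measure} ~ C({group_col})", data=df).fit()
    anova_table = sm.stats.anova_lm(model, typ=2)          # omnibus group effect
    tukey = pairwise_tukeyhsd(df[measure], df[group_col])  # pairwise comparisons
    return anova_table, tukey
```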

Restorative artificial intelligence-driven implant dentistry for immediate implant placement with an interim crown: A clinical report.

Marques VR, Soh D, Cerqueira G, Orgev A

PubMed · Aug 14, 2025
Immediate implant placement into the extraction socket based on a restoratively driven approach poses challenges that might compromise the delivery of an immediate interim restoration on the day of surgery. The digitally designed interim restoration may require modification before delivery and may not maintain the planned form needed to support the gingival architecture and the future prosthetic volume for the emergence profile. This report demonstrates how artificial intelligence (AI)-assisted segmentation of bone and tooth can enhance restoratively driven planning for immediate implant placement with an immediate interim restoration. A fractured maxillary central incisor was extracted after cone beam computed tomography (CBCT) analysis. AI-assisted segmentation of the digital imaging and communications in medicine (DICOM) file was used to separate the tooth and alveolar bone segmentations for digital implant planning and for AI-assisted design of the interim restoration, which copied the natural tooth contour to optimize the emergence profile. Immediate implant placement was completed after minimally traumatic extraction, and the AI-assisted interim restoration was delivered immediately. The AI-assisted workflow enabled predictable implant positioning based on restorative needs, reducing surgical time and optimizing delivery of the interim restoration for improved clinical outcomes. The emergence profile of the anatomic crown, copied from the AI workflow for the interim restoration, effectively guided soft tissue healing.

Exploring the potential of generative artificial intelligence in medical image synthesis: opportunities, challenges, and future directions.

Khosravi B, Purkayastha S, Erickson BJ, Trivedi HM, Gichoya JW

PubMed · Aug 14, 2025
Generative artificial intelligence has emerged as a transformative force in medical imaging since 2022, enabling the creation of derivative synthetic datasets that closely resemble real-world data. This Viewpoint examines key aspects of synthetic data, focusing on its advancements, applications, and challenges in medical imaging. Various generative artificial intelligence image generation paradigms, such as physics-informed and statistical models, and their potential to augment and diversify medical research resources are explored. The promises of synthetic datasets, including increased diversity, privacy preservation, and multifunctionality, are also discussed, along with their ability to model complex biological phenomena. Next, specific applications using synthetic data such as enhancing medical education, augmenting rare disease datasets, improving radiology workflows, and enabling privacy-preserving multicentre collaborations are highlighted. The challenges and ethical considerations surrounding generative artificial intelligence, including patient privacy, data copying, and potential biases that could impede clinical translation, are also addressed. Finally, future directions for research and development in this rapidly evolving field are outlined, emphasising the need for robust evaluation frameworks and responsible utilisation of generative artificial intelligence in medical imaging.

Ultrasound Phase Aberrated Point Spread Function Estimation with Convolutional Neural Network: Simulation Study.

Shen WH, Lin YA, Li ML

PubMed · Aug 13, 2025
Ultrasound imaging systems rely on accurate point spread function (PSF) estimation to support advanced image quality enhancement techniques such as deconvolution and speckle reduction. Phase aberration, caused by sound speed inhomogeneity within biological tissue, is inevitable in ultrasound imaging. It distorts the PSF by increasing sidelobe level and introducing asymmetric amplitude, making PSF estimation under phase aberration highly challenging. In this work, we propose a deep learning framework for estimating phase-aberrated PSFs using U-Net and complex U-Net architectures, operating on RF and complex k-space data, respectively, with the latter demonstrating superior performance. Synthetic phase aberration data, generated using the near-field phase screen model, is employed to train the networks. We evaluate various loss functions and find that log-compressed B-mode perceptual loss achieves the best performance, accurately predicting both the mainlobe and near sidelobe regions of the PSF. Simulation results validate the effectiveness of our approach in estimating PSFs under varying levels of phase aberration. Furthermore, we demonstrate that more accurate PSF estimation improves performance in a downstream phase aberration correction task, highlighting the broader utility of the proposed method.
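The core of the best-performing loss in this abstract is the conversion of RF PSFs to log-compressed B-mode images before comparison. A minimal sketch of that conversion and a simple L1 comparison follows; the dynamic range and the omission of the perceptual-feature stage are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.signal import hilbert

def log_compressed_bmode(rf_psf, dynamic_range_db=60.0):
    """Envelope-detect an RF PSF along the axial axis and log-compress it.

    A rough stand-in for the B-mode conversion used inside a
    log-compressed B-mode loss; the dynamic range is an assumed value.
    """
    envelope = np.abs(hilbert(rf_psf, axis=0))   # analytic signal along depth
    envelope = envelope / (envelope.max() + 1e-12)
    bmode_db = 20.0 * np.log10(envelope + 1e-12)
    return np.clip(bmode_db, -dynamic_range_db, 0.0)

def bmode_l1_loss(pred_rf_psf, true_rf_psf):
    """L1 distance between log-compressed B-mode images of two PSFs
    (the perceptual feature extractor used in the paper is omitted here)."""
    return np.mean(np.abs(log_compressed_bmode(pred_rf_psf)
                          - log_compressed_bmode(true_rf_psf)))
```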

Exploring Radiologists' Use of AI Chatbots for Assistance in Image Interpretation: Patterns of Use and Trust Evaluation.

Alarifi M

PubMed · Aug 13, 2025
This study investigated radiologists' perceptions of AI-generated, patient-friendly radiology reports across three modalities: MRI, CT, and mammogram/ultrasound. The evaluation focused on report correctness, completeness, terminology complexity, and emotional impact. Seventy-nine radiologists from four major Saudi Arabian hospitals assessed AI-simplified versions of clinical radiology reports. Each participant reviewed one report from each modality and completed a structured questionnaire covering factual correctness, completeness, terminology complexity, and emotional impact. A structured and detailed prompt was used to guide ChatGPT-4 in generating the reports, which included clear findings, a lay summary, glossary, and clarification of ambiguous elements. Statistical analyses included descriptive summaries, Friedman tests, and Pearson correlations. Radiologists rated mammogram reports highest for correctness (M = 4.22), followed by CT (4.05) and MRI (3.95). Completeness scores followed a similar trend. Statistically significant differences were found in correctness (χ²(2) = 17.37, p < 0.001) and completeness (χ²(2) = 13.13, p = 0.001). Anxiety and complexity ratings were moderate, with MRI reports linked to slightly higher concern. A weak positive correlation emerged between radiologists' experience and mammogram correctness ratings (r = .235, p = .037). Radiologists expressed overall support for AI-generated simplified radiology reports when created using a structured prompt that includes summaries, glossaries, and clarification of ambiguous findings. While mammography and CT reports were rated favorably, MRI reports showed higher emotional impact, highlighting a need for clearer and more emotionally supportive language.
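The reported statistics (a Friedman test across the three modalities rated by the same radiologists, plus a Pearson correlation with years of experience) map directly onto standard SciPy calls. The sketch below is illustrative; the input arrays are hypothetical per-radiologist rating vectors, one entry per radiologist in the same order.

```python
from scipy.stats import friedmanchisquare, pearsonr

def modality_comparison(mri_scores, ct_scores, mammo_scores, years_experience):
    """Friedman test across repeated ratings and Pearson correlation
    between experience and mammogram ratings (illustrative inputs)."""
    chi2, p_friedman = friedmanchisquare(mri_scores, ct_scores, mammo_scores)
    r, p_corr = pearsonr(years_experience, mammo_scores)
    return {"friedman_chi2": chi2, "friedman_p": p_friedman,
            "experience_vs_mammo_r": r, "experience_vs_mammo_p": p_corr}
```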

Economic Evaluations and Equity in the Use of Artificial Intelligence in Imaging Examinations for Medical Diagnosis in People With Dermatological, Neurological, and Pulmonary Diseases: Systematic Review.

Santana GO, Couto RM, Loureiro RM, Furriel BCRS, de Paula LGN, Rother ET, de Paiva JPQ, Correia LR

PubMed · Aug 13, 2025
Health care systems around the world face numerous challenges. Recent advances in artificial intelligence (AI) have offered promising solutions, particularly in diagnostic imaging. This systematic review focused on evaluating the economic feasibility of AI in real-world diagnostic imaging scenarios, specifically for dermatological, neurological, and pulmonary diseases. The central question was whether the use of AI in these diagnostic assessments improves economic outcomes and promotes equity in health care systems. This systematic review has 2 main components: economic evaluation and equity assessment. We used the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) tool to ensure adherence to best practices in systematic reviews. The protocol was registered with PROSPERO (International Prospective Register of Systematic Reviews), and we followed the PRISMA-E (Preferred Reporting Items for Systematic Reviews and Meta-Analyses - Equity Extension) guidelines for equity. Scientific articles reporting on economic evaluations or equity considerations related to the use of AI-based tools in diagnostic imaging in dermatology, neurology, or pulmonology were included in the study. The search was conducted in the PubMed, Embase, Scopus, and Web of Science databases. Methodological quality was assessed using the following checklists: CHEC (Consensus on Health Economic Criteria) for economic evaluations, EPHPP (Effective Public Health Practice Project) for equity evaluation studies, and Welte for transferability. The systematic review identified 9 publications within the scope of the research question, with sample sizes ranging from 122 to over 1.3 million participants. The majority of studies addressed economic evaluation (88.9%), with most studies addressing pulmonary diseases (n=6; 66.6%), followed by neurological diseases (n=2; 22.3%), and only 1 (11.1%) study addressing dermatological diseases. These studies had an average quality score of 87.5% on the CHEC checklist. Only 2 studies were found to be transferable to Brazil and other countries with a similar health context. The economic evaluation revealed that 87.5% of studies reported benefits of using AI in dermatology, neurology, and pulmonology, highlighting significant cost-effectiveness outcomes; the most advantageous was a negative cost-effectiveness ratio of -US $27,580 per QALY (quality-adjusted life year) for melanoma diagnosis, indicating substantial cost savings in this scenario. The only study assessing equity, based on 129,819 radiographic images, identified AI-assisted underdiagnosis, particularly in certain subgroups defined by gender, ethnicity, and socioeconomic status. This review underscores the importance of transparency in the description of AI tools and the representativeness of population subgroups to mitigate health disparities. As AI is rapidly being integrated into health care, detailed assessments are essential to ensure that benefits reach all patients, regardless of sociodemographic factors.
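The cost-effectiveness ratio per QALY cited above is an incremental cost-effectiveness ratio (ICER): incremental cost divided by incremental QALYs relative to a comparator strategy. A tiny sketch of that arithmetic follows; the inputs are hypothetical and not drawn from the review.

```python
def incremental_cost_effectiveness_ratio(cost_new, cost_standard,
                                         qaly_new, qaly_standard):
    """ICER = (cost_new - cost_standard) / (QALY_new - QALY_standard).

    A negative ratio together with a QALY gain (as in the melanoma example
    above, about -US $27,580 per QALY) means the new strategy both saves
    money and improves outcomes. All inputs here are hypothetical.
    """
    delta_qaly = qaly_new - qaly_standard
    if delta_qaly == 0:
        raise ValueError("ICER is undefined when there is no QALY difference.")
    return (cost_new - cost_standard) / delta_qaly
```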