
TFKT V2: task-focused knowledge transfer from natural images for computed tomography perceptual image quality assessment.

Rifa KR, Ahamed MA, Zhang J, Imran A

PubMed · Sep 1, 2025
The accurate assessment of computed tomography (CT) image quality is crucial for ensuring diagnostic reliability while minimizing radiation dose. Radiologists' evaluations are time-consuming and labor-intensive, and existing automated approaches typically require large CT datasets with predefined image quality assessment (IQA) scores that often do not align well with clinical evaluations. We aim to develop a reference-free, automated method for CT IQA that closely reflects radiologists' evaluations while reducing the dependency on large annotated datasets. We propose Task-Focused Knowledge Transfer (TFKT), a deep learning-based IQA method that leverages knowledge transfer from task-similar natural image datasets. TFKT incorporates a hybrid convolutional neural network-transformer model, learning from natural image distortions annotated with human mean opinion scores to enable accurate quality predictions. The model is pre-trained on natural image datasets and fine-tuned on low-dose CT perceptual image quality assessment data to ensure task-specific adaptability. Extensive evaluations demonstrate that TFKT predicts IQA scores aligned with radiologists' assessments on in-domain datasets and generalizes well to out-of-domain clinical pediatric CT exams. The model achieves robust performance without requiring high-dose reference images and can assess ~30 CT image slices per second. The proposed TFKT approach provides a scalable, accurate, and reference-free solution for CT IQA, bridging the gap between traditional and deep learning-based IQA and offering clinically relevant, computationally efficient assessments applicable to real-world clinical settings.
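To make the hybrid CNN-transformer design concrete, here is a minimal PyTorch sketch of such a no-reference IQA regressor. The backbone, layer sizes, and the `HybridIQAModel` name are illustrative assumptions, not the authors' architecture; the two-stage pretrain/fine-tune recipe is indicated only in comments.

```python
import torch
import torch.nn as nn

class HybridIQAModel(nn.Module):
    """CNN feature extractor followed by a transformer encoder and a
    regression head that predicts a single quality score per image."""

    def __init__(self, embed_dim=256, num_heads=4, num_layers=2):
        super().__init__()
        # Small CNN stem for illustration; the paper's exact backbone
        # is not specified here.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, embed_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers)
        self.head = nn.Linear(embed_dim, 1)  # scalar MOS / IQA score

    def forward(self, x):
        feats = self.cnn(x)                        # (B, C, H', W')
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H'*W', C)
        tokens = self.transformer(tokens)
        return self.head(tokens.mean(dim=1))       # pool tokens -> score

model = HybridIQAModel()
# Stage 1 (conceptually): regress human MOS on distorted natural images.
# Stage 2: fine-tune the same weights on LDCT perceptual-quality labels.
scores = model(torch.randn(4, 1, 128, 128))
print(scores.shape)  # torch.Size([4, 1])
```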

FairICP: identifying biases and increasing transparency at the point of care in post-implementation clinical decision support using inductive conformal prediction.

Sun X, Nakashima M, Nguyen C, Chen PH, Tang WHW, Kwon D, Chen D

PubMed · Jun 15, 2025
Fairness concerns stemming from known and unknown biases in healthcare practices have raised questions about the trustworthiness of Artificial Intelligence (AI)-driven Clinical Decision Support Systems (CDSS). Studies have shown unforeseen performance disparities in subpopulations when models are applied to clinical settings that differ from those they were trained in. Existing unfairness mitigation strategies often struggle with scalability and accessibility, and their pursuit of group-level prediction performance parity does not effectively translate into fairness at the point of care. This study introduces FairICP, a flexible and cost-effective post-implementation framework based on Inductive Conformal Prediction (ICP), which gives users actionable knowledge of model uncertainty due to subpopulation-level biases at the point of care. FairICP applies ICP to identify the model's scope of competence through group-specific calibration, ensuring equitable prediction reliability by retaining only predictions that fall within the trusted competence boundaries. We evaluated FairICP against four benchmarks on three medical imaging modalities: (1) cardiac magnetic resonance imaging (MRI), (2) chest X-ray, and (3) dermatology imaging, acquired from both private and large public datasets. Frameworks were assessed on prediction performance enhancement and unfairness mitigation. Compared to the baseline, FairICP improved prediction accuracy by 7.2% and reduced the accuracy gap between privileged and unprivileged subpopulations by 2.2% on average across the three datasets. Our work provides a robust solution for promoting trust and transparency in AI-CDSS, fostering equality and equity in healthcare for diverse patient populations. Such post-processing methods are critical to enabling a robust framework for AI-CDSS implementation and monitoring in healthcare settings.
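The core ICP mechanics described above can be sketched in a few lines. The NumPy example below computes group-specific nonconformity thresholds on a held-out calibration set and flags test predictions that fall inside each subgroup's competence region; the function names and the choice of nonconformity score are assumptions for illustration, not the FairICP implementation.

```python
import numpy as np

def group_conformal_thresholds(cal_scores, cal_groups, alpha=0.1):
    """Per-group nonconformity quantiles from a held-out calibration set.
    cal_scores: nonconformity values (e.g., 1 - softmax prob of true class)."""
    thresholds = {}
    for g in np.unique(cal_groups):
        s = np.sort(cal_scores[cal_groups == g])
        n = len(s)
        # Finite-sample-corrected (1 - alpha) quantile, standard in ICP.
        k = int(np.ceil((n + 1) * (1 - alpha))) - 1
        thresholds[g] = s[min(k, n - 1)]
    return thresholds

def within_competence(test_scores, test_groups, thresholds):
    """Keep only predictions whose nonconformity falls under the
    calibrated threshold of the patient's own subgroup."""
    return np.array([s <= thresholds[g]
                     for s, g in zip(test_scores, test_groups)])

rng = np.random.default_rng(0)
cal = rng.uniform(size=500)
groups = rng.choice(["A", "B"], size=500)
thr = group_conformal_thresholds(cal, groups, alpha=0.1)
keep = within_competence(rng.uniform(size=5), np.array(["A"] * 5), thr)
print(thr, keep)
```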

Artificial intelligence for age-related macular degeneration diagnosis in Australia: A Novel Qualitative Interview Study.

Ly A, Herse S, Williams MA, Stapleton F

PubMed · Jun 14, 2025
Artificial intelligence (AI) systems for age-related macular degeneration (AMD) diagnosis abound but are not yet widely implemented. AI implementation is complex, requiring the involvement of multiple, diverse stakeholders including technology developers, clinicians, patients, health networks, public hospitals, private providers and payers. There is a pressing need to investigate how AI might be adopted to improve patient outcomes. The purpose of this first study of its kind was to use the AI-translation extended version of the non-adoption, abandonment, scale-up, spread and sustainability (NASSS) of healthcare technologies framework to explore stakeholder experiences, attitudes, enablers, barriers and possible futures of digital diagnosis using AI for AMD and eyecare in Australia. Semi-structured online interviews were conducted with 37 stakeholders (12 clinicians, 10 healthcare leaders, 8 patients and 7 developers) from September 2022 to March 2023. The interviews were audio-recorded, transcribed and analysed using directed and summative content analysis. Technological features influencing implementation were discussed most frequently, followed by the context or wider system, the value proposition, adopters, organisations, the condition and, finally, embedding and adaptation over time. Patients preferred to focus on the condition, while healthcare leaders elaborated on organisational factors. Overall, stakeholders supported a portable, device-independent clinical decision support tool that could be integrated with existing diagnostic equipment and patient management systems. Opportunities for AI to drive new models of healthcare, patient education and outreach, and the importance of maintaining equity across population groups were consistently emphasised. This is the first investigation to report numerous, interacting perspectives on the adoption of digital diagnosis for AMD in Australia, incorporating an intentionally diverse stakeholder group and the patient voice. It provides a series of practical considerations for the implementation of AI and digital diagnosis into existing care for people with AMD.

FDTooth: Intraoral Photographs and CBCT Images for Fenestration and Dehiscence Detection.

Liu K, Elbatel M, Chu G, Shan Z, Sum FHKMH, Hung KF, Zhang C, Li X, Yang Y

PubMed · Jun 14, 2025
Fenestration and dehiscence (FD) pose significant challenges in dental treatment, as they adversely affect oral health. Although cone-beam computed tomography (CBCT) provides precise diagnosis, its time requirements and radiation exposure limit its routine use for monitoring. Currently, no public dataset combines intraoral photographs with corresponding CBCT images, which limits the development of deep learning algorithms for the automated detection of FD and other potential diseases. In this paper, we present FDTooth, a dataset that includes both intraoral photographs and CBCT images of 241 patients aged between 9 and 55 years. FDTooth contains 1,800 precise bounding boxes annotated on intraoral photographs, with gold-standard ground truth extracted from CBCT. We developed a baseline model for automated FD detection in intraoral photographs. The dataset and model can serve as valuable resources for research on interdisciplinary dental diagnostics, offering clinicians an efficient, non-invasive method for early FD screening.
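As a rough illustration of what a box-level FD baseline might look like, the sketch below fine-tunes a standard torchvision Faster R-CNN on an intraoral photograph with one bounding-box annotation. The class count and the dummy training step are placeholder assumptions; the paper does not specify this architecture.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# 2 classes: background + FD lesion. The dataset's actual label scheme
# may distinguish fenestration from dehiscence; adjust num_classes.
num_classes = 2
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# One dummy training step on a synthetic intraoral photo + box annotation.
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[100., 120., 180., 200.]]),
            "labels": torch.tensor([1])}]
model.train()
losses = model(images, targets)        # dict of classification/box losses
total = sum(losses.values())
total.backward()
print({k: float(v) for k, v in losses.items()})
```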

High-Fidelity 3D Imaging of Dental Scenes Using Gaussian Splatting.

Jin CX, Li MX, Yu H, Gao Y, Guo YP, Xia GS, Huang C

PubMed · Jun 13, 2025
Three-dimensional visualization is increasingly used in dentistry for diagnostics, education, and treatment design. The accurate replication of geometry and color is crucial for these applications. Image-based rendering, which uses 2-dimensional photos to generate photo-realistic 3-dimensional representations, provides an affordable and practical option, aiding both regular and remote health care. This study explores an advanced novel view synthesis (NVS) method called Gaussian splatting (GS), a differentiable image-based rendering approach, to assess its feasibility for dental scene capturing. The rendering quality and resource usage were compared with representative NVS methods. In addition, the linear measurement trueness of extracted craniofacial meshes was evaluated against a commercial facial scanner and 3 smartphone facial scanning apps, while teeth meshes were assessed against 2 intraoral scanners and a desktop scanner. GS-based representation demonstrated superior rendering quality, achieving the highest visual quality, fastest rendering speed, and lowest resource usage. The craniofacial measurements showed similar trueness to commercial facial scanners. The dental measurements had larger deviations than intraoral and desktop scanners did, although all deviations remained within clinically acceptable limits. The GS-based representation shows great potential for developing a convenient and cost-effective method of capturing dental scenes, offering a balance between color fidelity and trueness suitable for clinical applications.

Inference of single cell profiles from histology stains with the Single-Cell omics from Histology Analysis Framework (SCHAF)

Comiter, C., Chen, X., Vaishnav, E. D., Kobayashi-Kirschvink, K. J., Ciampricotti, M., Zhang, K., Murray, J., Monticolo, F., Qi, J., Tanaka, R., Brodowska, S. E., Li, B., Yang, Y., Rodig, S. J., Karatza, A., Quintanal Villalonga, A., Turner, M., Pfaff, K. L., Jane-Valbuena, J., Slyper, M., Waldman, J., Vigneau, S., Wu, J., Blosser, T. R., Segerstolpe, A., Abravanel, D., Wagle, N., Demehri, S., Zhuang, X., Rudin, C. M., Klughammer, J., Rozenblatt-Rosen, O., Stultz, C. M., Shu, J., Regev, A.

bioRxiv preprint · Jun 13, 2025
Tissue biology involves an intricate balance between cell-intrinsic processes and interactions between cells organized in specific spatial patterns. These can be captured, respectively, by single cell profiling methods, such as single cell RNA-seq (scRNA-seq) and spatial transcriptomics, and by histology imaging data, such as Hematoxylin-and-Eosin (H&E) stains. While single cell profiles provide rich molecular information, they are challenging to collect routinely in the clinic and either lack spatial resolution or offer limited gene throughput. Conversely, histological H&E assays have been a cornerstone of tissue pathology for decades, but do not directly report on molecular details, although the structures they capture arise from molecules and cells. Here, we leverage vision transformers and adversarial deep learning to develop the Single Cell omics from Histology Analysis Framework (SCHAF), which generates a tissue sample's spatially resolved, whole-transcriptome, single cell omics dataset from its H&E histology image. We demonstrate SCHAF on a variety of tissues, including lung cancer, metastatic breast cancer, placentae, and whole mouse pups, training with matched samples analyzed by sc/snRNA-seq, H&E staining, and, when available, spatial transcriptomics. SCHAF generated appropriate single cell profiles from histology images in test data, related them spatially, and compared well to ground-truth scRNA-seq, expert pathologist annotations, and direct spatial transcriptomic measurements, with some limitations. SCHAF opens the way to next-generation H&E analyses and an integrated understanding of cell and tissue biology in health and disease.
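At its simplest, the mapping SCHAF learns can be caricatured as an image-patch encoder feeding an expression decoder. The toy module below is only a shape-level sketch under that reading: SCHAF's actual vision-transformer encoders and adversarial training objective are not reproduced, and every name here is invented.

```python
import torch
import torch.nn as nn

class HistologyToExpression(nn.Module):
    """Toy illustration: encode an H&E patch and decode a gene-expression
    vector for the cell at its centre. Hypothetical names and sizes."""

    def __init__(self, n_genes=2000, embed_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim), nn.ReLU(),
        )
        self.decoder = nn.Linear(embed_dim, n_genes)

    def forward(self, patch):
        # Softplus keeps predicted expression non-negative, like counts.
        return torch.nn.functional.softplus(self.decoder(self.encoder(patch)))

model = HistologyToExpression()
expr = model(torch.randn(8, 3, 64, 64))   # 8 patches -> 8 expression vectors
print(expr.shape)                          # torch.Size([8, 2000])
```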

Quantitative and qualitative assessment of ultra-low-dose paranasal sinus CT using deep learning image reconstruction: a comparison with hybrid iterative reconstruction.

Otgonbaatar C, Lee D, Choi J, Jang H, Shim H, Ryoo I, Jung HN, Suh S

PubMed · Jun 13, 2025
This study aimed to evaluate the quantitative and qualitative performance of ultra-low-dose computed tomography (CT) with deep learning image reconstruction (DLR) compared with hybrid iterative reconstruction (IR) for preoperative paranasal sinus (PNS) imaging. This retrospective analysis included 132 patients who underwent non-contrast ultra-low-dose sinus CT (0.03 mSv). Images were reconstructed using hybrid IR and DLR. Objective image quality metrics, including image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), noise power spectrum (NPS), and no-reference perceptual image sharpness, were assessed. Two board-certified radiologists independently performed subjective image quality evaluations. The ultra-low-dose CT protocol achieved a low radiation dose (effective dose: 0.03 mSv). DLR showed significantly lower image noise (28.62 ± 4.83 Hounsfield units) than hybrid IR (140.70 ± 16.04, p < 0.001), yielding smoother and more uniform images. DLR demonstrated significantly improved SNR (22.47 ± 5.82 vs 9.14 ± 2.45, p < 0.001) and CNR (71.88 ± 14.03 vs 11.81 ± 1.50, p < 0.001). NPS analysis revealed that DLR reduced both the noise magnitude and the NPS peak values. Additionally, DLR produced significantly sharper images (no-reference perceptual sharpness: 0.56 ± 0.04 vs 0.36 ± 0.01). Radiologists rated DLR as superior to hybrid IR in overall image quality, bone structure visualization, and diagnostic confidence at ultra-low-dose CT. DLR significantly outperformed hybrid IR in ultra-low-dose PNS CT by reducing image noise, improving SNR and CNR, enhancing image sharpness, and maintaining critical anatomical visualization, demonstrating its potential for effective preoperative planning with minimal radiation exposure.

Question: Ultra-low-dose CT of the paranasal sinuses is essential for patients requiring repeated scans and for functional endoscopic sinus surgery (FESS) planning, to reduce cumulative radiation exposure.

Findings: DLR outperformed hybrid IR in ultra-low-dose paranasal sinus CT.

Clinical relevance: Ultra-low-dose CT with DLR delivers sufficient image quality for detailed surgical planning, effectively minimizing unnecessary radiation exposure to enhance patient safety.
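For readers reproducing the objective metrics, the snippet below shows one common way to compute image noise, SNR, and CNR from ROI statistics on synthetic data. Definitions of these metrics vary between studies, so the formulas here are generic assumptions rather than the authors' exact protocol.

```python
import numpy as np

def roi_stats(image, roi):
    """Mean and standard deviation (HU) inside a boolean ROI mask."""
    vals = image[roi]
    return vals.mean(), vals.std(ddof=1)

def snr_cnr(image, tissue_roi, background_roi):
    mu_t, sd_t = roi_stats(image, tissue_roi)
    mu_b, sd_b = roi_stats(image, background_roi)
    snr = mu_t / sd_t              # signal-to-noise ratio
    cnr = abs(mu_t - mu_b) / sd_b  # contrast-to-noise ratio
    return sd_t, snr, cnr          # sd_t doubles as image noise

rng = np.random.default_rng(1)
img = rng.normal(40, 29, size=(256, 256))             # tissue ~40 HU
img[:64, :64] = rng.normal(-950, 29, size=(64, 64))   # air pocket
tissue = np.zeros_like(img, bool); tissue[128:160, 128:160] = True
air = np.zeros_like(img, bool); air[16:48, 16:48] = True
print(snr_cnr(img, tissue, air))  # (noise, SNR, CNR)
```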

3D Skin Segmentation Methods in Medical Imaging: A Comparison

Martina Paccini, Giuseppe Patanè

arXiv preprint · Jun 13, 2025
Automatic segmentation of anatomical structures is critical in medical image analysis, aiding diagnostics and treatment planning. Skin segmentation plays a key role in registering and visualising multimodal imaging data. 3D skin segmentation enables applications in personalised medicine, surgical planning, and remote monitoring, offering realistic patient models for treatment simulation, procedural visualisation, and continuous condition tracking. This paper analyses and compares algorithmic and AI-driven skin segmentation approaches, emphasising key factors to consider when selecting a strategy based on data availability and application requirements. We evaluate an iterative region-growing algorithm and the TotalSegmentator, a deep learning-based approach, across different imaging modalities and anatomical regions. Our tests show that AI segmentation excels in automation but struggles with MRI due to its CT-based training, while the graphics-based method performs better for MRIs but introduces more noise. AI-driven segmentation also automates patient bed removal in CT, whereas the graphics-based method requires manual intervention.
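A minimal version of the iterative region-growing idea evaluated here can be written directly. The sketch below grows a 6-connected region from a seed voxel under an intensity tolerance; the tolerance value and connectivity are illustrative choices, not the paper's exact parameters.

```python
from collections import deque
import numpy as np

def region_grow(volume, seed, tol=100):
    """Grow a region from `seed` over 6-connected voxels whose intensity
    stays within `tol` of the seed value."""
    mask = np.zeros(volume.shape, dtype=bool)
    seed_val = volume[seed]
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and abs(volume[nz, ny, nx] - seed_val) <= tol):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask

vol = np.full((32, 32, 32), -1000.0)         # air background (HU)
vol[8:24, 8:24, 8:24] = 0.0                  # soft-tissue "skin" block
print(region_grow(vol, (16, 16, 16)).sum())  # 16**3 = 4096 voxels
```

In practice this is where the noise trade-off mentioned above comes from: a generous tolerance leaks into adjacent structures, while a tight one fragments the skin surface.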

Taming Stable Diffusion for Computed Tomography Blind Super-Resolution

Chunlei Li, Yilei Shi, Haoxi Hu, Jingliang Hu, Xiao Xiang Zhu, Lichao Mou

arXiv preprint · Jun 13, 2025
High-resolution computed tomography (CT) imaging is essential for medical diagnosis but requires increased radiation exposure, creating a critical trade-off between image quality and patient safety. While deep learning methods have shown promise in CT super-resolution, they face challenges with complex degradations and limited medical training data. Meanwhile, large-scale pre-trained diffusion models, particularly Stable Diffusion, have demonstrated remarkable capabilities in synthesizing fine details across various vision tasks. Motivated by this, we propose a novel framework that adapts Stable Diffusion for CT blind super-resolution. We employ a practical degradation model to synthesize realistic low-quality images and leverage a pre-trained vision-language model to generate corresponding descriptions. Subsequently, we perform super-resolution using Stable Diffusion with a specialized controlling strategy, conditioned on both low-resolution inputs and the generated text descriptions. Extensive experiments show that our method outperforms existing approaches, demonstrating its potential for achieving high-quality CT imaging at reduced radiation doses. Our code will be made publicly available.
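The "practical degradation model" step can be illustrated with a generic blur-downsample-noise pipeline, as in the NumPy/SciPy sketch below. The paper's actual degradation model is likely richer, and all parameter values here are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def degrade(hr, scale=4, blur_sigma=1.2, noise_sigma=0.01, rng=None):
    """Synthesize a low-quality CT slice from a high-resolution one:
    blur -> downsample -> additive Gaussian noise. Illustrative only."""
    if rng is None:
        rng = np.random.default_rng()
    lr = gaussian_filter(hr, sigma=blur_sigma)        # optical/detector blur
    lr = zoom(lr, 1.0 / scale, order=1)               # resolution loss
    lr = lr + rng.normal(0.0, noise_sigma, lr.shape)  # dose-related noise
    return np.clip(lr, 0.0, 1.0)

hr = np.random.default_rng(0).random((256, 256))  # stand-in HR slice in [0, 1]
lr = degrade(hr)
print(hr.shape, "->", lr.shape)  # (256, 256) -> (64, 64)
```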

Exploring the Effectiveness of Deep Features from Domain-Specific Foundation Models in Retinal Image Synthesis

Zuzanna Skorniewska, Bartlomiej W. Papiez

arXiv preprint · Jun 13, 2025
The adoption of neural network models in medical imaging has been constrained by strict privacy regulations, limited data availability, high acquisition costs, and demographic biases. Deep generative models offer a promising solution by generating synthetic data that bypasses privacy concerns and addresses fairness by producing samples for under-represented groups. However, unlike natural images, medical imaging requires validation not only for fidelity (e.g., Fréchet Inception Score) but also for morphological and clinical accuracy. This is particularly true for colour fundus retinal imaging, which requires precise replication of the retinal vascular network, including vessel topology, continuity, and thickness. In this study, we investigated whether a distance-based loss function built on the deep activation layers of a large foundation model trained on a large corpus of domain data (colour fundus images) offers advantages over perceptual and edge-detection-based loss functions. Our extensive validation pipeline, based on both domain-free and domain-specific tasks, suggests that domain-specific deep features do not improve autoencoder image generation. Conversely, our findings highlight the effectiveness of conventional edge detection filters in improving the sharpness of vascular structures in synthetic samples.
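As an example of the edge-detection-based losses the study found effective, here is a small PyTorch Sobel edge loss for single-channel images; the kernel choice and L1 comparison are illustrative assumptions, not the exact loss used in the paper.

```python
import torch
import torch.nn.functional as F

def sobel_edges(img):
    """Horizontal and vertical Sobel responses for a (B, 1, H, W) batch.
    For colour fundus images, apply per channel or on the green channel."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()
    k = torch.stack([kx, ky]).unsqueeze(1)      # (2, 1, 3, 3)
    return F.conv2d(img, k.to(img), padding=1)  # (B, 2, H, W)

def edge_loss(fake, real):
    """Penalize differences in edge maps, encouraging sharp, continuous
    vessel boundaries in synthetic images."""
    return F.l1_loss(sobel_edges(fake), sobel_edges(real))

real = torch.rand(2, 1, 64, 64)
fake = real + 0.05 * torch.randn_like(real)
print(float(edge_loss(fake, real)))
```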