
2.5D Multi-view Averaging Diffusion Model for 3D Medical Image Translation: Application to Low-count PET Reconstruction with CT-less Attenuation Correction.

Chen T, Hou J, Zhou Y, Xie H, Chen X, Liu Q, Guo X, Xia M, Duncan JS, Liu C, Zhou B

May 15 2025
Positron Emission Tomography (PET) is an important clinical imaging tool but inevitably introduces radiation exposure to patients and healthcare providers. Reducing the tracer injection dose and eliminating the CT acquisition for attenuation correction can reduce the overall radiation dose, but often results in PET with high noise and bias. Thus, it is desirable to develop 3D methods to translate the non-attenuation-corrected low-dose PET (NAC-LDPET) into attenuation-corrected standard-dose PET (AC-SDPET). Recently, diffusion models have emerged as a new state-of-the-art deep learning method for image-to-image translation, outperforming traditional CNN-based methods. However, due to their high computation cost and memory burden, they have largely been limited to 2D applications. To address these challenges, we developed a novel 2.5D Multi-view Averaging Diffusion Model (MADM) for 3D image-to-image translation, with application to NAC-LDPET-to-AC-SDPET translation. Specifically, MADM employs separate diffusion models for axial, coronal, and sagittal views, whose outputs are averaged in each sampling step to ensure 3D generation quality from multiple views. To accelerate the 3D sampling process, we also proposed a strategy to use CNN-based 3D generation as a prior for the diffusion model. Our experimental results on human patient studies suggested that MADM can generate high-quality 3D translation images, outperforming previous CNN-based and diffusion-based baseline methods. The code is available at https://github.com/tianqic/MADM.
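A minimal sketch of the per-step multi-view averaging idea in PyTorch, assuming three pre-trained per-view denoisers with a shared slice-wise interface (illustrative only; the function and model names are hypothetical, not from the released MADM code):

```python
import torch

def multi_view_averaging_step(x_t, t, denoisers):
    """One reverse-diffusion step: denoise the 3D volume slice-wise from each
    anatomical view and average the per-view estimates.

    x_t       : current noisy 3D volume, shape (D, H, W)
    t         : diffusion timestep
    denoisers : dict of per-view 2.5D models {"axial", "coronal", "sagittal"}
    """
    estimates = []
    # Slice the volume along each axis, denoise slice-wise, and re-stack.
    for axis, model in zip((0, 1, 2), (denoisers["axial"],
                                       denoisers["coronal"],
                                       denoisers["sagittal"])):
        slices = torch.unbind(x_t, dim=axis)                 # 2D slices for this view
        denoised = [model(s.unsqueeze(0), t).squeeze(0) for s in slices]
        estimates.append(torch.stack(denoised, dim=axis))    # back to (D, H, W)
    # Average the three view-specific estimates to form the next sample.
    return torch.stack(estimates).mean(dim=0)
```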

Recognizing artery segments on carotid ultrasonography using embedding concatenation of deep image and vision-language models.

Lo CM, Sung SF

May 14 2025
Evaluating large artery atherosclerosis is critical for predicting and preventing ischemic strokes. Ultrasonographic assessment of the carotid arteries is the preferred first-line examination due to its ease of use, noninvasiveness, and absence of radiation exposure. This study proposed an automated classification model for the common carotid artery (CCA), carotid bulb, internal carotid artery (ICA), and external carotid artery (ECA) to enhance the quantification of carotid artery examinations. Approach: A total of 2,943 B-mode ultrasound images (CCA: 1,563; bulb: 611; ICA: 476; ECA: 293) from 288 patients were collected. Three distinct sets of embedding features were extracted from artificial intelligence networks, including pre-trained DenseNet201, vision Transformer (ViT), and echo contrastive language-image pre-training (EchoCLIP) models, using deep learning architectures for pattern recognition. These features were then combined in a support vector machine (SVM) classifier to interpret the anatomical structures in B-mode images. Main results: After ten-fold cross-validation, the model achieved an accuracy of 82.3%, which was significantly better than using individual feature sets (p < 0.001). Significance: The proposed model could make carotid artery examinations more accurate and consistent with the achieved classification accuracy. The source code is available at https://github.com/buddykeywordw/Artery-Segments-Recognition.
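A minimal sketch of the embedding-concatenation step with scikit-learn, assuming the three feature sets have already been extracted and saved as NumPy arrays (file names and shapes are placeholders, not the authors' pipeline):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Pre-extracted embeddings per image (placeholder files): DenseNet201, ViT, EchoCLIP.
densenet_feats = np.load("densenet201_feats.npy")   # shape (n_images, d1)
vit_feats      = np.load("vit_feats.npy")           # shape (n_images, d2)
echoclip_feats = np.load("echoclip_feats.npy")      # shape (n_images, d3)
labels         = np.load("labels.npy")              # 0=CCA, 1=bulb, 2=ICA, 3=ECA

# Concatenate the three embedding sets into one feature vector per image.
X = np.concatenate([densenet_feats, vit_feats, echoclip_feats], axis=1)

# SVM classifier evaluated with ten-fold cross-validation, as in the study.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, labels, cv=10)
print(f"10-fold accuracy: {scores.mean():.3f}")
```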

Error correcting 2D-3D cascaded network for myocardial infarct scar segmentation on late gadolinium enhancement cardiac magnetic resonance images.

Schwab M, Pamminger M, Kremser C, Obmann D, Haltmeier M, Mayr A

May 10 2025
Late gadolinium enhancement (LGE) cardiac magnetic resonance (CMR) imaging is considered the in vivo reference standard for assessing infarct size (IS) and microvascular obstruction (MVO) in ST-elevation myocardial infarction (STEMI) patients. However, the exact quantification of these markers of myocardial infarct severity remains challenging and very time-consuming. As LGE distribution patterns can be quite complex and hard to delineate from the blood pool or epicardial fat, automatic segmentation of LGE CMR images is challenging. In this work, we propose a cascaded framework of two-dimensional and three-dimensional convolutional neural networks (CNNs) that enables fully automated calculation of the extent of myocardial infarction. By artificially generating segmentation errors that are characteristic of 2D CNNs during training of the cascaded framework, we enforce the detection and correction of 2D segmentation errors and hence improve the segmentation accuracy of the entire method. The proposed method was trained and evaluated on two publicly available datasets. We perform comparative experiments showing that our framework outperforms state-of-the-art reference methods in segmentation of myocardial infarction. Furthermore, extensive ablation studies show the advantages of the proposed error-correcting cascaded method. The code of this project is publicly available at https://github.com/matthi99/EcorC.git.
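A minimal sketch of the training-time idea, where simulated 2D-style errors are injected into the stacked 2D prediction before the 3D network learns to correct it (PyTorch; the slice-dropping corruption is a simplified stand-in for the paper's error-generation scheme, and all names are hypothetical):

```python
import torch

def corrupt_2d_segmentation(seg_2d, drop_prob=0.1):
    """Simulate characteristic 2D-CNN errors by randomly zeroing whole
    slices of the stacked 2D prediction (a simplified stand-in for the
    artificial error generation described in the paper)."""
    seg = seg_2d.clone()
    for z in range(seg.shape[0]):
        if torch.rand(1).item() < drop_prob:
            seg[z] = 0  # missing-slice error, typical of slice-wise 2D CNNs
    return seg

def cascade_training_step(image_3d, target_3d, net_2d, net_3d, loss_fn):
    """One training step of the 2D-3D cascade: the 3D network learns to
    detect and correct errors in the (corrupted) 2D prediction."""
    with torch.no_grad():
        seg_2d = torch.stack(
            [net_2d(image_3d[z:z + 1]) for z in range(image_3d.shape[0])]
        ).squeeze(1)
    seg_2d = corrupt_2d_segmentation(seg_2d)
    # The 3D network receives image and 2D prediction as two input channels
    # and outputs the corrected 3D mask.
    refined = net_3d(torch.stack([image_3d, seg_2d], dim=0).unsqueeze(0))
    return loss_fn(refined, target_3d.unsqueeze(0))
```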

Evaluating an information theoretic approach for selecting multimodal data fusion methods.

Zhang T, Ding R, Luong KD, Hsu W

May 10 2025
Interest has grown in combining radiology, pathology, genomic, and clinical data to improve the accuracy of diagnostic and prognostic predictions toward precision health. However, most existing works choose their datasets and modeling approaches empirically and in an ad hoc manner. A prior study proposed four partial information decomposition (PID)-based metrics to provide a theoretical understanding of multimodal data interactions: redundancy, uniqueness of each modality, and synergy. However, these metrics have only been evaluated on a limited collection of biomedical data, and the existing work does not elucidate the effect of parameter selection when calculating the PID metrics. In this work, we evaluate PID metrics on a wider range of biomedical data, including clinical, radiology, pathology, and genomic data, and propose potential improvements to the PID metrics. We apply the PID metrics to seven different modality pairs across four distinct cohorts (datasets). We compare and interpret trends in the resulting PID metrics and downstream model performance in these multimodal cohorts. The downstream tasks evaluated include predicting the prognosis (either overall survival or recurrence) of patients with non-small cell lung cancer, prostate cancer, and glioblastoma. We found that, while PID metrics are informative, relying solely on these metrics to decide on a fusion approach does not always yield a machine learning model with optimal performance. Of the seven modality pairs, three had poor (0%), three had moderate (66%-89%), and only one had perfect (100%) consistency between the PID values and model performance. We propose two improvements to the PID metrics (determining the optimal parameters and uncertainty estimation) and identify areas where the PID metrics could be further improved. The current PID metrics are not accurate enough for estimating multimodal data interactions and need to be improved before they can serve as a reliable tool. We propose improvements and provide suggestions for future work. Code: https://github.com/zhtyolivia/pid-multimodal.
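A minimal sketch of the kind of consistency check reported above, comparing the ranking of modality pairs by a PID metric (here synergy) against the ranking by downstream fusion performance (all values below are placeholders, not the study's results):

```python
# Placeholder PID estimates and downstream fusion performance per modality pair.
synergy = {("radiology", "pathology"): 0.42,
           ("radiology", "clinical"):  0.18,
           ("pathology", "genomic"):   0.31}
fusion_performance = {("radiology", "pathology"): 0.71,
                      ("radiology", "clinical"):  0.66,
                      ("pathology", "genomic"):   0.69}

# Consistency: does ranking pairs by synergy reproduce the ranking by
# downstream model performance?
rank_by_pid   = sorted(synergy, key=synergy.get, reverse=True)
rank_by_model = sorted(fusion_performance, key=fusion_performance.get, reverse=True)
agreement = sum(a == b for a, b in zip(rank_by_pid, rank_by_model)) / len(rank_by_pid)
print(f"rank agreement: {agreement:.0%}")
```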

Deep compressed multichannel adaptive optics scanning light ophthalmoscope.

Park J, Hagan K, DuBose TB, Maldonado RS, McNabb RP, Dubra A, Izatt JA, Farsiu S

May 9 2025
Adaptive optics scanning light ophthalmoscopy (AOSLO) reveals individual retinal cells and their function, microvasculature, and micropathologies in vivo. Compared with the single-channel offset-pinhole and two-channel split-detector nonconfocal AOSLO designs, a recent generation of multidetector and (multi-)offset aperture AOSLO modalities provides multidirectional imaging capabilities and has been demonstrated to yield critical information about retinal microstructures. However, increasing the number of detection channels requires expensive optical components and/or critically increases imaging time. To address this issue, we present an innovative combination of machine learning and optics as an integrated technology to compressively capture 12 nonconfocal-channel AOSLO images simultaneously. Imaging of healthy participants and diseased subjects using the proposed deep compressed multichannel AOSLO showed enhanced visualization of rods, cones, and mural cells with over an order-of-magnitude improvement in imaging speed as compared to conventional offset aperture imaging. To facilitate adaptation and integration with other in vivo microscopy systems, we made the optical design, acquisition, and computational reconstruction codes open source.

Comparative analysis of open-source against commercial AI-based segmentation models for online adaptive MR-guided radiotherapy.

Langner D, Nachbar M, Russo ML, Boeke S, Gani C, Niyazi M, Thorwarth D

May 8 2025
Online adaptive magnetic resonance-guided radiotherapy (MRgRT) has emerged as a state-of-the-art treatment option for multiple tumour entities, accounting for daily anatomical and tumour volume changes and thus allowing sparing of relevant organs at risk (OARs). However, the annotation of treatment-relevant anatomical structures in the context of online plan adaptation remains challenging, often relying on commercial segmentation solutions due to the limited availability of clinically validated alternatives. The aim of this study was to investigate whether an open-source artificial intelligence (AI) segmentation network can compete with the annotation accuracy of a commercial solution, both trained on the identical dataset, questioning the need for commercial models in clinical practice. For 47 pelvic patients, T2w MR imaging data acquired on a 1.5 T MR-Linac were manually contoured, identifying prostate, seminal vesicles, rectum, anal canal, bladder, penile bulb, and bony structures. These training data were used for the generation of an in-house AI segmentation model, a nnU-Net with residual encoder architecture featuring a streamlined single-image inference pipeline, and for re-training of a commercial solution. For quantitative evaluation, 20 MR images were contoured by a radiation oncologist, considered as ground truth contours (GTC), and compared with the in-house/commercial AI-based contours (iAIC/cAIC) using the Dice Similarity Coefficient (DSC), 95% Hausdorff distance (HD95), and surface DSC (sDSC). For qualitative evaluation, four radiation oncologists assessed the usability of OAR/target iAIC within an online adaptive workflow using a four-point Likert scale: (1) acceptable without modification, (2) requiring minor adjustments, (3) requiring major adjustments, and (4) not usable. Patient-individual annotations were generated in a median [range] time of 23 [16-34] s for iAIC and 152 [121-198] s for cAIC, respectively. OARs showed a maximum median DSC of 0.97/0.97 (iAIC/cAIC) for the bladder and a minimum median DSC of 0.78/0.79 (iAIC/cAIC) for the anal canal/penile bulb. The maximal and minimal median HD95 were observed for the rectum, with 17.3/20.6 mm (iAIC/cAIC), and the bladder, with 5.6/6.0 mm (iAIC/cAIC), respectively. Overall, the average median DSC/HD95 values were 0.87/11.8 mm (iAIC) and 0.83/10.2 mm (cAIC) for OARs/targets, and 0.90/11.9 mm (iAIC) and 0.91/16.5 mm (cAIC) for bony structures. For a tolerance of 3 mm, the highest sDSC was determined for the bladder (iAIC: 1.00, cAIC: 0.99), and the lowest for the prostate in iAIC (0.89) and the anal canal in cAIC (0.80). Qualitatively, 84.8% of analysed contours were considered clinically acceptable for iAIC, while 12.9% required minor and 2.3% major adjustments or were classed as unusable. Contour-specific analysis showed that iAIC achieved the best mean score, 1.00, for the anal canal and the worst, 1.61, for the prostate. This study demonstrates that an open-source segmentation framework can achieve annotation accuracy comparable to commercial solutions for pelvic anatomy in online adaptive MRgRT. The adapted framework not only maintained high segmentation performance, with 84.8% of contours accepted by physicians or requiring only minor corrections (12.9%), but also enhanced the clinical workflow efficiency of online adaptive MRgRT through reduced inference times. These findings establish open-source frameworks as viable alternatives to commercial systems in supervised clinical workflows.
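For reference, a minimal sketch of the volumetric Dice similarity coefficient used in the quantitative comparison (binary masks as NumPy arrays; HD95 and surface DSC are typically computed with dedicated libraries and are omitted here):

```python
import numpy as np

def dice_similarity_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """DSC = 2|P ∩ G| / (|P| + |G|) for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```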

Weakly supervised language models for automated extraction of critical findings from radiology reports.

Das A, Talati IA, Chaves JMZ, Rubin D, Banerjee I

May 8 2025
Critical findings in radiology reports are life-threatening conditions that need to be communicated promptly to physicians for timely management of patients. Although challenging, advancements in natural language processing (NLP), particularly large language models (LLMs), now enable the automated identification of key findings from verbose reports. Given the scarcity of labeled critical-findings data, we implemented a two-phase, weakly supervised fine-tuning approach on 15,000 unlabeled Mayo Clinic reports. This fine-tuned model then automatically extracted critical terms on internal (Mayo Clinic, n = 80) and external (MIMIC-III, n = 123) test datasets, validated against expert annotations. Model performance was further assessed on 5,000 MIMIC-IV reports using the LLM-aided metrics G-eval and Prometheus. Both manual and LLM-based evaluations showed improved task alignment with weak supervision. The pipeline and model, publicly available under an academic license, can aid in critical-finding extraction for research and clinical use (https://github.com/dasavisha/CriticalFindings_Extract).
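A minimal sketch of how weak labels for critical findings might be derived from unlabeled reports before fine-tuning, using a simple keyword lexicon (the lexicon and example report are illustrative assumptions, not the paper's labeling function):

```python
import re

# Illustrative lexicon of critical findings; the actual weak-supervision
# signal in the paper is derived differently and at larger scale.
CRITICAL_TERMS = ["pneumothorax", "pulmonary embolism", "free air",
                  "intracranial hemorrhage", "aortic dissection"]

def weak_label(report_text: str) -> list[str]:
    """Return the critical terms mentioned in a report (weak labels)."""
    text = report_text.lower()
    return [t for t in CRITICAL_TERMS if re.search(rf"\b{re.escape(t)}\b", text)]

print(weak_label("Findings: large right-sided pneumothorax with mediastinal shift."))
# ['pneumothorax']
```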

ChatOCT: Embedded Clinical Decision Support Systems for Optical Coherence Tomography in Offline and Resource-Limited Settings.

Liu C, Zhang H, Zheng Z, Liu W, Gu C, Lan Q, Zhang W, Yang J

May 7 2025
Optical Coherence Tomography (OCT) is a critical imaging modality for diagnosing ocular and systemic conditions, yet its accessibility is hindered by the need for specialized expertise and high computational demands. To address these challenges, we introduce ChatOCT, an offline-capable, domain-adaptive clinical decision support system (CDSS) that integrates structured expert Q&A generation, OCT-specific knowledge injection, and activation-aware model compression. Unlike existing systems, ChatOCT functions without internet access, making it suitable for low-resource environments. ChatOCT is built upon LLaMA-2-7B, incorporating domain-specific knowledge from PubMed and OCT News through a two-stage training process: (1) knowledge injection for OCT-specific expertise and (2) Q&A instruction tuning for structured, interactive diagnostic reasoning. To ensure feasibility in offline environments, we apply activation-aware weight quantization, reducing GPU memory usage to ~4.74 GB and enabling deployment on standard OCT hardware. A novel expert answer generation framework mitigates hallucinations by structuring responses in a multi-step process, ensuring accuracy and interpretability. ChatOCT outperforms state-of-the-art baselines such as LLaMA-2, PMC-LLaMA-13B, and ChatDoctor by 10-15 points in coherence, relevance, and clinical utility, reducing GPU memory requirements by 79% while maintaining real-time responsiveness (~20 ms inference time). Expert ophthalmologists rated ChatOCT's outputs as clinically actionable and aligned with real-world decision-making needs, confirming its potential to assist frontline healthcare providers. ChatOCT represents an innovative offline clinical decision support system for OCT that runs entirely on local embedded hardware, enabling real-time analysis in resource-limited settings without internet connectivity. By offering a scalable, generalizable pipeline that integrates knowledge injection, instruction tuning, and model compression, ChatOCT provides a blueprint for next-generation, resource-efficient clinical AI solutions across multiple medical domains.
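A minimal sketch of loading a 4-bit-quantized LLaMA-2-7B for offline inference with Hugging Face Transformers; bitsandbytes quantization is used here as a stand-in for the activation-aware scheme described above, and the local model path is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_path = "./llama-2-7b-oct"  # assumed local, domain-adapted checkpoint

# 4-bit weight quantization to fit the model into a few GB of GPU memory.
quant_config = BitsAndBytesConfig(load_in_4bit=True,
                                  bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             quantization_config=quant_config,
                                             device_map="auto")

prompt = "Summarize the key findings in this macular OCT report: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```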

Enhancing efficient deep learning models with multimodal, multi-teacher insights for medical image segmentation.

Hossain KF, Kamran SA, Ong J, Tavakkoli A

May 7 2025
The rapid evolution of deep learning has dramatically enhanced the field of medical image segmentation, leading to the development of models with unprecedented accuracy in analyzing complex medical images. Deep learning-based segmentation holds significant promise for advancing clinical care and enhancing the precision of medical interventions. However, the high computational demand and complexity of these models present significant barriers to their application in resource-constrained clinical settings. To address this challenge, we introduce Teach-Former, a novel knowledge distillation (KD) framework that leverages a Transformer backbone to effectively condense the knowledge of multiple teacher models into a single, streamlined student model. Moreover, it excels in the contextual and spatial interpretation of relationships across multimodal images for more accurate and precise segmentation. Teach-Former stands out by harnessing multimodal inputs (CT, PET, MRI) and distilling both the final predictions and the intermediate attention maps, ensuring a richer spatial and contextual knowledge transfer. Through this technique, the student model inherits the capacity for fine segmentation while operating with a significantly reduced parameter set and computational footprint. Additionally, a novel training strategy optimizes knowledge transfer, ensuring the student model captures the intricate mapping of features essential for high-fidelity segmentation. The efficacy of Teach-Former has been tested on two extensive multimodal datasets, HECKTOR21 and PI-CAI22, encompassing various image types. The results demonstrate that our KD strategy reduces model complexity and surpasses existing state-of-the-art methods in segmentation performance. The findings of this study indicate that the proposed methodology could facilitate efficient segmentation of complex multimodal medical images, supporting clinicians in achieving more precise diagnoses and comprehensive monitoring of pathological conditions (https://github.com/FarihaHossain/TeachFormer).
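A minimal sketch of a multi-teacher distillation loss combining averaged teacher soft predictions with an attention-map matching term, in PyTorch (a generic formulation under stated assumptions, not the Teach-Former implementation):

```python
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list,
                          student_attn, teacher_attn_list,
                          labels, T=2.0, alpha=0.5, beta=0.1):
    """Supervised loss + soft-label KD against the averaged teachers
    + MSE between student and averaged teacher attention maps."""
    # Average the teachers' soft predictions and attention maps.
    teacher_soft = torch.stack([F.softmax(t / T, dim=1)
                                for t in teacher_logits_list]).mean(dim=0)
    teacher_attn = torch.stack(teacher_attn_list).mean(dim=0)

    ce   = F.cross_entropy(student_logits, labels)
    kd   = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    teacher_soft, reduction="batchmean") * (T * T)
    attn = F.mse_loss(student_attn, teacher_attn)
    return (1 - alpha) * ce + alpha * kd + beta * attn
```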