
A triple-pronged approach for ulcerative colitis severity classification using multimodal, meta, and transformer-based learning.

Ahmed MN, Neogi D, Kabir MR, Rahman S, Momen S, Mohammed N

PubMed · Jul 26, 2025
Ulcerative colitis (UC) is a chronic inflammatory disorder necessitating precise severity stratification to facilitate optimal therapeutic interventions. This study harnesses a triple-pronged deep learning methodology, combining multimodal inference pipelines that eliminate domain-specific training, few-shot meta-learning, and Vision Transformer (ViT)-based ensembling, to classify UC severity within the HyperKvasir dataset. We systematically evaluate multiple vision transformer architectures, finding that a Swin-Base model achieves an accuracy of 90%, while a soft-voting ensemble of diverse ViT backbones boosts performance to 93%. In parallel, we leverage multimodal pre-trained frameworks (e.g., CLIP, BLIP, FLAVA) integrated with conventional machine learning algorithms, yielding an accuracy of 83%. To address limited annotated data, we deploy few-shot meta-learning approaches (e.g., Matching Networks), attaining 83% accuracy in a 5-shot setting. Furthermore, interpretability is enhanced via SHapley Additive exPlanations (SHAP), which explain both local and global model behavior, thereby fostering clinical trust in the model's inferences. These findings underscore the potential of contemporary representation learning and ensemble strategies for robust UC severity classification, and highlight the pivotal role of model transparency in medical image analysis.
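
As a rough illustration of the soft-voting ensembling described above, the sketch below averages per-class softmax probabilities across several backbones; the number of severity classes, the dummy stand-in models, and the batch shape are assumptions for illustration only, not the authors' configuration.

# Minimal sketch of soft-voting across classification backbones.
# NUM_CLASSES and the toy models are illustrative assumptions.
import torch
import torch.nn.functional as F

NUM_CLASSES = 4  # assumption: four UC severity grades

def soft_vote(models, images):
    """Average per-class softmax probabilities across backbones, then take argmax."""
    probs = []
    for model in models:
        model.eval()
        with torch.no_grad():
            logits = model(images)                   # (B, NUM_CLASSES)
            probs.append(F.softmax(logits, dim=1))
    mean_probs = torch.stack(probs).mean(dim=0)      # (B, NUM_CLASSES)
    return mean_probs.argmax(dim=1)                  # predicted severity class per image

# Toy stand-ins for ViT backbones (e.g., Swin-Base) to keep the sketch runnable.
dummy_backbones = [
    torch.nn.Sequential(torch.nn.Flatten(), torch.nn.LazyLinear(NUM_CLASSES))
    for _ in range(3)
]
batch = torch.randn(2, 3, 224, 224)
print(soft_vote(dummy_backbones, batch))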

A Metabolic-Imaging Integrated Model for Prognostic Prediction in Colorectal Liver Metastases

Qinlong Li, Pu Sun, Guanlin Zhu, Tianjiao Liang, Honggang Qi

arXiv preprint · Jul 26, 2025
Prognostic evaluation in patients with colorectal liver metastases (CRLM) remains challenging due to suboptimal accuracy of conventional clinical models. This study developed and validated a robust machine learning model for predicting postoperative recurrence risk. Preliminary ensemble models achieved exceptionally high performance (AUC > 0.98) but incorporated postoperative features, introducing data leakage risks. To enhance clinical applicability, we restricted input variables to preoperative baseline clinical parameters and radiomic features from contrast-enhanced CT imaging, specifically targeting recurrence prediction at 3, 6, and 12 months postoperatively. The 3-month recurrence prediction model demonstrated optimal performance with an AUC of 0.723 in cross-validation. Decision curve analysis revealed that across threshold probabilities of 0.55-0.95, the model consistently provided greater net benefit than "treat-all" or "treat-none" strategies, supporting its utility in postoperative surveillance and therapeutic decision-making. This study successfully developed a robust predictive model for early CRLM recurrence with confirmed clinical utility. Importantly, it highlights the critical risk of data leakage in clinical prognostic modeling and proposes a rigorous framework to mitigate this issue, enhancing model reliability and translational value in real-world settings.
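
The decision curve analysis mentioned above compares the model's net benefit against treat-all and treat-none strategies; a minimal sketch of that computation is shown below, using synthetic labels and probabilities rather than the study's data.

# Net benefit at a threshold probability pt: TP/N - FP/N * (pt / (1 - pt)).
# Labels and predicted probabilities below are synthetic placeholders.
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    treat = y_prob >= threshold
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - fp / n * (threshold / (1.0 - threshold))

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_prob = np.clip(y_true * 0.3 + rng.random(200) * 0.7, 0, 1)  # toy probabilities

for pt in (0.55, 0.75, 0.95):
    model_nb = net_benefit(y_true, y_prob, pt)
    treat_all_nb = net_benefit(y_true, np.ones_like(y_prob), pt)
    print(f"pt={pt:.2f}: model {model_nb:.3f}, treat-all {treat_all_nb:.3f}, treat-none 0.000")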

All-in-One Medical Image Restoration with Latent Diffusion-Enhanced Vector-Quantized Codebook Prior

Haowei Chen, Zhiwen Yang, Haotian Hou, Hui Zhang, Bingzheng Wei, Gang Zhou, Yan Xu

arXiv preprint · Jul 26, 2025
All-in-one medical image restoration (MedIR) aims to address multiple MedIR tasks using a unified model, concurrently recovering various high-quality (HQ) medical images (e.g., MRI, CT, and PET) from low-quality (LQ) counterparts. However, all-in-one MedIR presents significant challenges due to the heterogeneity across different tasks. Each task involves distinct degradations, leading to diverse information losses in LQ images. Existing methods struggle to handle these diverse information losses associated with different tasks. To address these challenges, we propose a latent diffusion-enhanced vector-quantized codebook prior and develop DiffCode, a novel framework leveraging this prior for all-in-one MedIR. Specifically, to compensate for diverse information losses associated with different tasks, DiffCode constructs a task-adaptive codebook bank to integrate task-specific HQ prior features across tasks, capturing a comprehensive prior. Furthermore, to enhance prior retrieval from the codebook bank, DiffCode introduces a latent diffusion strategy that utilizes the diffusion model's powerful mapping capabilities to iteratively refine the latent feature distribution, estimating more accurate HQ prior features during restoration. With the help of the task-adaptive codebook bank and latent diffusion strategy, DiffCode achieves superior performance in both quantitative metrics and visual quality across three MedIR tasks: MRI super-resolution, CT denoising, and PET synthesis.
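
For readers unfamiliar with vector-quantized priors, the sketch below shows nearest-neighbor codebook retrieval, the basic operation such a prior builds on; the codebook size, feature dimension, and single shared codebook are simplifying assumptions (the paper describes a task-adaptive codebook bank refined by a latent diffusion strategy).

# Nearest-codeword lookup: each latent feature is replaced by its closest codebook entry.
# Sizes below are assumed for illustration.
import torch

def vq_lookup(features, codebook):
    # features: (N, D), codebook: (K, D)
    dists = torch.cdist(features, codebook)   # (N, K) pairwise L2 distances
    indices = dists.argmin(dim=1)             # nearest codeword per feature
    return codebook[indices], indices

codebook = torch.randn(512, 64)     # K=512 codewords of dimension 64 (assumed sizes)
lq_features = torch.randn(10, 64)   # latent features from a degraded image (synthetic)
hq_prior, idx = vq_lookup(lq_features, codebook)
print(hq_prior.shape, idx[:5])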

Optimization of deep learning models for inference in low-resource environments.

Thakur S, Pati S, Wu J, Panchumarthy R, Karkada D, Kozlov A, Shamporov V, Suslov A, Lyakhov D, Proshin M, Shah P, Makris D, Bakas S

PubMed · Jul 26, 2025
Artificial Intelligence (AI), and particularly deep learning (DL), has shown great promise to revolutionize healthcare. However, clinical translation is often hindered by demanding hardware requirements. In this study, we assess the effectiveness of optimization techniques for DL models in healthcare applications, targeting varying AI workloads across the domains of radiology, histopathology, and medical RGB imaging, and evaluating across hardware configurations. The assessed workloads span both segmentation and classification: brain extraction in Magnetic Resonance Imaging (MRI), colorectal cancer delineation in Hematoxylin & Eosin (H&E)-stained digitized tissue sections, and diabetic foot ulcer classification in RGB images. We quantitatively evaluate model performance in terms of runtime during inference (including speedup, latency, and memory usage) and model utility on unseen data. Our results demonstrate that optimization techniques can substantially improve model runtime without compromising model utility. These findings suggest that optimization techniques can facilitate the clinical translation of AI models in low-resource environments, making them more practical for real-world healthcare applications even in underserved regions.
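
As one generic example of the kind of inference optimization evaluated in such studies, the sketch below applies post-training dynamic INT8 quantization in PyTorch and times the result; this is not the specific toolchain, model, or workload used by the authors.

# Post-training dynamic INT8 quantization of a toy model, with a crude latency comparison.
import time
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 2)).eval()
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(64, 1024)

def time_it(m, runs=50):
    start = time.perf_counter()
    with torch.no_grad():
        for _ in range(runs):
            m(x)
    return (time.perf_counter() - start) / runs

print(f"fp32: {time_it(model) * 1e3:.2f} ms/batch")
print(f"int8: {time_it(quantized) * 1e3:.2f} ms/batch")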

KC-UNIT: Multi-kernel conversion using unpaired image-to-image translation with perceptual guidance in chest computed tomography imaging.

Choi C, Kim D, Park S, Lee H, Kim H, Lee SM, Kim N

PubMed · Jul 26, 2025
Computed tomography (CT) images are reconstructed from raw data (sinograms) through back projection using various convolution kernels. Kernels are typically chosen depending on the anatomical structure being imaged and the specific purpose of the scan, balancing the trade-off between image sharpness and pixel noise. Because sinograms require large storage capacity and storage space is often limited in clinical settings, CT images are generally reconstructed with only one specific kernel, and the sinogram is typically discarded after a week. Therefore, many researchers have proposed deep learning-based image-to-image translation methods for CT kernel conversion. However, transferring the style of the target kernel while preserving anatomical structure remains challenging, particularly when translating CT images from a source domain to a target domain in an unpaired manner, as is often the case in real-world settings. Thus, we propose a novel kernel conversion method using unpaired image-to-image translation (KC-UNIT). This approach regularizes the discriminator using feature maps from the generator to improve semantic representation learning. To capture content and style features, a cosine similarity content loss and a contrastive style loss were defined between the feature map of the generator and the semantic label map of the discriminator. These can be easily incorporated by modifying the discriminator's architecture, without requiring any additional learnable or pre-trained networks. KC-UNIT preserved fine-grained anatomical structure from the source domain during transfer and outperformed existing generative adversarial network-based methods on most kernel conversion tasks across three kernel domains. The code is available at https://github.com/cychoi97/KC-UNIT.
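
A minimal sketch of a cosine-similarity content loss between a generator feature map and a discriminator semantic map, in the spirit of the losses described above, is given below; the tensor shapes and the exact pairing of maps are assumptions, and the authors' actual implementation is available at the linked repository.

# Cosine-similarity content loss over spatially aligned channel vectors.
# Shapes are assumed for illustration.
import torch
import torch.nn.functional as F

def cosine_content_loss(gen_feat, disc_feat):
    """Encourage per-location feature vectors of generator and discriminator to align."""
    # gen_feat, disc_feat: (B, C, H, W)
    gen = F.normalize(gen_feat, dim=1)
    disc = F.normalize(disc_feat, dim=1)
    cos = (gen * disc).sum(dim=1)   # (B, H, W) cosine similarity per spatial location
    return (1.0 - cos).mean()       # 0 when perfectly aligned

g = torch.randn(2, 64, 32, 32, requires_grad=True)
d = torch.randn(2, 64, 32, 32)
loss = cosine_content_loss(g, d)
loss.backward()
print(loss.item())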

Artificial intelligence-assisted compressed sensing CINE enhances the workflow of cardiac magnetic resonance in challenging patients.

Wang H, Schmieder A, Watkins M, Wang P, Mitchell J, Qamer SZ, Lanza G

PubMed · Jul 26, 2025
A key challenge of cardiac magnetic resonance (CMR) is the breath-holding duration required, which is difficult for cardiac patients. The aim was to evaluate whether artificial intelligence-assisted compressed sensing CINE (AI-CS-CINE) reduces CMR image acquisition time compared with conventional CINE (C-CINE). Cardio-oncology patients (n = 60) and healthy volunteers (n = 29) underwent sequential C-CINE and AI-CS-CINE on a 1.5-T scanner. Acquisition time, visual image quality, and biventricular metrics (end-diastolic volume, end-systolic volume, stroke volume, ejection fraction, left ventricular mass, and wall thickness) were analyzed and compared between C-CINE and AI-CS-CINE using Bland-Altman analysis and the intraclass correlation coefficient (ICC). In 89 participants (58.5 ± 16.8 years, 42 males, 47 females), total AI-CS-CINE acquisition and reconstruction time (37 seconds) was 84% shorter than for C-CINE (238 seconds). C-CINE required repeats in 23% (20/89) of cases (approximately 8 minutes lost), whereas AI-CS-CINE needed only one repeat (1%; 2 seconds lost). AI-CS-CINE had slightly lower contrast but preserved structural clarity. Bland-Altman plots and ICCs (0.73 ≤ r ≤ 0.98) showed strong agreement for left ventricle (LV) and right ventricle (RV) metrics, including in the cardiac amyloidosis subgroup (n = 31). AI-CS-CINE enabled faster, easier imaging in patients with claustrophobia, dyspnea, arrhythmias, or restlessness, and cases with motion-artifacted C-CINE could still be reliably interpreted from the corresponding AI-CS-CINE images. AI-CS-CINE accelerated CMR image acquisition and reconstruction, preserved anatomical detail, and diminished the impact of patient-related motion. Quantitative AI-CS-CINE metrics agreed closely with C-CINE in cardio-oncology patients, including the cardiac amyloidosis cohort, as well as in healthy volunteers, regardless of left and right ventricular size and function. AI-CS-CINE significantly enhanced the CMR workflow, particularly in challenging cases, and the strong analytical concordance underscores its reliability and robustness as a valuable tool.
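
The agreement analysis reported above relies on Bland-Altman limits of agreement; a minimal example of that calculation on synthetic ejection-fraction values is shown below.

# Bland-Altman bias and 95% limits of agreement between two measurement sets.
# The ejection-fraction values are synthetic, not study data.
import numpy as np

def bland_altman(a, b):
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, (bias - loa, bias + loa)

rng = np.random.default_rng(1)
ef_conventional = rng.normal(60, 8, size=89)            # toy ejection fractions (%)
ef_ai = ef_conventional + rng.normal(0, 1.5, size=89)   # small simulated disagreement

bias, (lo, hi) = bland_altman(ef_conventional, ef_ai)
print(f"bias {bias:.2f}%, 95% limits of agreement [{lo:.2f}, {hi:.2f}]%")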

Synomaly noise and multi-stage diffusion: A novel approach for unsupervised anomaly detection in medical images.

Bi Y, Huang L, Clarenbach R, Ghotbi R, Karlas A, Navab N, Jiang Z

PubMed · Jul 26, 2025
Anomaly detection in medical imaging plays a crucial role in identifying pathological regions across various imaging modalities, such as brain MRI, liver CT, and carotid ultrasound (US). However, training fully supervised segmentation models is often hindered by the scarcity of expert annotations and the complexity of diverse anatomical structures. To address these issues, we propose a novel unsupervised anomaly detection framework based on a diffusion model that incorporates a synthetic anomaly (Synomaly) noise function and a multi-stage diffusion process. Synomaly noise introduces synthetic anomalies into healthy images during training, allowing the model to effectively learn anomaly removal. The multi-stage diffusion process is introduced to progressively denoise images, preserving fine details while improving the quality of anomaly-free reconstructions. The generated high-fidelity counterfactual healthy images can further enhance the interpretability of segmentation models, as well as provide a reliable baseline for evaluating the extent of anomalies and supporting clinical decision-making. Notably, the unsupervised anomaly detection model is trained purely on healthy images, eliminating the need for anomalous training samples and pixel-level annotations. We validate the proposed approach on brain MRI, liver CT, and carotid US datasets. The experimental results demonstrate that the proposed framework outperforms existing state-of-the-art unsupervised anomaly detection methods, achieving performance comparable to fully supervised segmentation models on the US dataset. Ablation studies further highlight the contributions of Synomaly noise and the multi-stage diffusion process to improved anomaly segmentation. These findings underscore the potential of our approach as a robust and annotation-efficient alternative for medical anomaly detection. Code: https://github.com/yuan-12138/Synomaly.
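
To illustrate the idea of injecting synthetic anomalies into healthy training images, the toy sketch below superimposes a Gaussian blob on a blank image; the blob shape and intensity are illustrative assumptions, not the authors' Synomaly noise function (see the linked repository for the actual implementation).

# Toy synthetic-anomaly injection: add a smooth Gaussian blob to a 2D "healthy" image.
import numpy as np

def add_synthetic_anomaly(image, center, radius, intensity):
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    blob = np.exp(-(((yy - center[0]) ** 2 + (xx - center[1]) ** 2) / (2 * radius ** 2)))
    return image + intensity * blob

healthy = np.zeros((128, 128), dtype=np.float32)   # placeholder for a healthy image
rng = np.random.default_rng(42)
center = rng.integers(20, 108, size=2)             # random anomaly location
corrupted = add_synthetic_anomaly(healthy, center, radius=8, intensity=1.0)
print(corrupted.max(), corrupted.argmax())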

Accelerating cardiac radial-MRI: Fully polar-based technique using compressed sensing and deep learning.

Ghodrati V, Duan J, Ali F, Bedayat A, Prosper A, Bydder M

PubMed · Jul 26, 2025
Fast radial-MRI approaches based on compressed sensing (CS) and deep learning (DL) often use non-uniform fast Fourier transform (NUFFT) as the forward imaging operator, which might introduce interpolation errors and reduce image quality. Using the polar Fourier transform (PFT), we developed fully polar CS and DL algorithms for fast 2D cardiac radial-MRI. Our methods directly reconstruct images in polar spatial space from polar k-space data, eliminating frequency interpolation and ensuring an easy-to-compute data consistency term for the DL framework via the variable splitting (VS) scheme. Furthermore, PFT reconstruction produces initial images with fewer artifacts in a reduced field of view, making it a better starting point for CS and DL algorithms, especially for dynamic imaging, where information from a small region of interest is critical, as opposed to NUFFT, which often results in global streaking artifacts. In the cardiac region, PFT-based CS technique outperformed NUFFT-based CS at acceleration rates of 5x (mean SSIM: 0.8831 vs. 0.8526), 10x (0.8195 vs. 0.7981), and 15x (0.7720 vs. 0.7503). Our PFT(VS)-DL technique outperformed the NUFFT(GD)-based DL method, which used unrolled gradient descent with the NUFFT as the forward imaging operator, with mean SSIM scores of 0.8914 versus 0.8617 at 10x and 0.8470 versus 0.8301 at 15x. Radiological assessments revealed that PFT(VS)-based DL scored 2.9±0.30 and 2.73±0.45 at 5x and 10x, whereas NUFFT(GD)-based DL scored 2.7±0.47 and 2.40±0.50, respectively. Our methods suggest a promising alternative to NUFFT-based fast radial-MRI for dynamic imaging, prioritizing reconstruction quality in a small region of interest over whole image quality.
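
The comparisons above are reported as mean SSIM; the short example below computes SSIM between a reference and a reconstruction with scikit-image, using synthetic arrays in place of actual MRI reconstructions.

# SSIM between a reference image and a noisy "reconstruction" (synthetic data).
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((256, 256)).astype(np.float32)
reconstruction = reference + 0.05 * rng.standard_normal((256, 256)).astype(np.float32)

ssim = structural_similarity(reference, reconstruction, data_range=1.0)
print(f"SSIM = {ssim:.4f}")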

Quantifying physiological variability and improving reproducibility in 4D-flow MRI cerebrovascular measurements with self-supervised deep learning.

Jolicoeur BW, Yardim ZS, Roberts GS, Rivera-Rivera LA, Eisenmenger LB, Johnson KM

PubMed · Jul 25, 2025
To assess the efficacy of self-supervised deep learning (DL) denoising in reducing measurement variability in 4D-Flow MRI, and to clarify the contributions of physiological variation to cerebrovascular hemodynamics. A self-supervised DL denoising framework was trained on 3D radially sampled 4D-Flow MRI data. The model was evaluated in a prospective test-retest imaging study in which 10 participants underwent multiple 4D-Flow MRI scans, including back-to-back scans and a single-scan interleaved acquisition designed to isolate noise from physiological variation. The effectiveness of DL denoising was assessed by comparing pixelwise velocity and hemodynamic metrics before and after denoising. DL denoising significantly enhanced the reproducibility of 4D-Flow MRI measurements, reducing the 95% confidence interval of cardiac-resolved velocity from 215 to 142 mm/s in back-to-back scans and from 158 to 96 mm/s in interleaved scans, after adjusting for physiological variation. Among derived parameters, DL denoising did not significantly improve integrated measures, such as flow rates, but did significantly improve noise-sensitive measures, such as the pulsatility index. Physiological variation in back-to-back time-resolved scans contributed 26.37% ± 0.08% and 32.42% ± 0.05% of the standard error before and after DL, respectively. Self-supervised DL denoising enhances the quantitative repeatability of 4D-Flow MRI by reducing technical noise; however, variation arising from physiology and post-processing is not removed. These findings underscore the importance of accounting for both technical and physiological variability in neurovascular flow imaging, particularly for studies aiming to establish biomarkers for neurodegenerative diseases with vascular contributions.
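
The pulsatility index cited above as a noise-sensitive measure is straightforward to compute from a cardiac-resolved velocity waveform; a small example with synthetic values follows.

# Pulsatility index PI = (V_max - V_min) / V_mean over the cardiac cycle (synthetic waveform).
import numpy as np

def pulsatility_index(velocity_waveform):
    v = np.asarray(velocity_waveform, dtype=float)
    return (v.max() - v.min()) / v.mean()

# Toy velocity-vs-cardiac-phase curve in mm/s.
phases = np.linspace(0, 2 * np.pi, 20, endpoint=False)
velocity = 400 + 150 * np.sin(phases)
print(f"PI = {pulsatility_index(velocity):.3f}")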

Artificial intelligence-based fully automatic 3D paranasal sinus segmentation.

Kaygısız Yiğit M, Pınarbaşı A, Etöz M, Duman ŞB, Bayrakdar İŞ

PubMed · Jul 25, 2025
Precise 3D segmentation of paranasal sinuses is essential for accurate diagnosis and treatment. This study aimed to develop a fully automated segmentation algorithm for the paranasal sinuses using the nnU-Net v2 architecture. The nnU-Net v2-based segmentation algorithm was developed using Python 3.6.1 and the PyTorch library, and its performance was evaluated on a dataset of 97 cone-beam computed tomography (CBCT) scans. Ground truth annotations were manually generated by expert radiologists using the 3D Slicer software, employing a polygonal labeling technique across sagittal, coronal, and axial planes. Model performance was assessed using several quantitative metrics, including accuracy, Dice Coefficient (DC), sensitivity, precision, Jaccard Index, Area Under the Curve (AUC), and 95% Hausdorff Distance (95% HD). The nnU-Net v2-based algorithm demonstrated high segmentation performance across all paranasal sinuses. Dice Coefficient (DC) values were 0.94 for the frontal, 0.95 for the sphenoid, 0.97 for the maxillary, and 0.88 for the ethmoid sinuses. Accuracy scores exceeded 99% for all sinuses. The 95% Hausdorff Distance (95% HD) values were 0.51 mm for both the frontal and maxillary sinuses, 0.85 mm for the sphenoid sinus, and 1.17 mm for the ethmoid sinus. Jaccard indices were 0.90, 0.91, 0.94, and 0.80, respectively. This study highlights the high accuracy and precision of the nnU-Net v2-based CNN model in the fully automated segmentation of paranasal sinuses from CBCT images. The results suggest that the proposed model can significantly contribute to clinical decision-making processes, facilitating diagnostic and therapeutic procedures.
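
The Dice coefficient and Jaccard index reported above are standard overlap metrics; the short example below computes both from binary segmentation masks (random masks are used purely for illustration).

# Dice and Jaccard overlap between binary 3D masks (random masks for illustration).
import numpy as np

def dice_and_jaccard(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    dice = 2 * intersection / (pred.sum() + truth.sum())
    jaccard = intersection / np.logical_or(pred, truth).sum()
    return dice, jaccard

rng = np.random.default_rng(3)
pred = rng.random((64, 64, 64)) > 0.5
truth = rng.random((64, 64, 64)) > 0.5
dice, jaccard = dice_and_jaccard(pred, truth)
print(f"Dice = {dice:.3f}, Jaccard = {jaccard:.3f}")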