
Sikkandar MY, Sundaram SG, Almeshari MN, Begum SS, Sankari ES, Alduraywish YA, Obidallah WJ, Alotaibi FM

PubMed · Jul 19, 2025
Lymphoma poses a critical health challenge worldwide, demanding computer-aided solutions for diagnosis, treatment, and research to significantly enhance patient outcomes and combat this pervasive disease. Accurate classification of lymphoma subtypes from Whole Slide Images (WSIs) remains a complex challenge due to morphological similarities among subtypes and the limitations of models that fail to jointly capture local and global features. Traditional diagnostic methods, limited by subjectivity and inconsistencies, highlight the need for advanced, Artificial Intelligence (AI)-driven solutions. This study proposes a hybrid deep learning framework, the Hybrid Convolutional and Transformer Network for Lymphoma Classification (HCTN-LC), designed to enhance the precision and interpretability of lymphoma subtype classification. The model employs a dual-pathway architecture that combines a lightweight SqueezeNet for local feature extraction with a Vision Transformer (ViT) for capturing global context. A Feature Fusion and Enhancement Module (FFEM) is introduced to dynamically integrate features from both pathways. The model is trained and evaluated on a large WSI dataset encompassing three lymphoma subtypes: chronic lymphocytic leukemia (CLL), follicular lymphoma (FL), and mantle cell lymphoma (MCL). HCTN-LC achieves superior performance with an overall accuracy of 99.87%, sensitivity of 99.87%, specificity of 99.93%, and AUC of 0.9991, outperforming several recent hybrid models. Grad-CAM visualizations confirm the model's focus on diagnostically relevant regions. The proposed HCTN-LC demonstrates strong potential for real-time and low-resource clinical deployment, offering a robust and interpretable AI tool for hematopathological diagnosis.
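A minimal PyTorch sketch of the dual-pathway idea described in this abstract: a small CNN stands in for the SqueezeNet local pathway, a patchify-plus-transformer stack stands in for the ViT global pathway, and a gated fusion layer stands in for the FFEM. All layer sizes and the gating design are illustrative assumptions, not the authors' implementation.

```python
# Dual-pathway (CNN + Transformer) classifier with gated feature fusion.
# A sketch of the HCTN-LC concept; dimensions and fusion are assumptions.
import torch
import torch.nn as nn

class DualPathwayClassifier(nn.Module):
    def __init__(self, num_classes=3, dim=256):
        super().__init__()
        # Local pathway: a small CNN standing in for SqueezeNet.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Global pathway: 16x16 patch embedding + transformer encoder (ViT-style).
        self.patch = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc, num_layers=2)
        # Fusion: a learned gate blends the two feature vectors.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        local = self.cnn(x)                                 # (B, dim)
        tokens = self.patch(x).flatten(2).transpose(1, 2)   # (B, N, dim)
        global_ = self.transformer(tokens).mean(dim=1)      # (B, dim)
        g = self.gate(torch.cat([local, global_], dim=1))   # gate values in [0, 1]
        return self.head(g * local + (1 - g) * global_)

logits = DualPathwayClassifier()(torch.randn(2, 3, 224, 224))  # CLL/FL/MCL logits
```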

Yu T, Chen K

PubMed · Jul 19, 2025
Cardiovascular illnesses continue to be a predominant cause of mortality globally, underscoring the necessity for prompt and precise diagnosis to mitigate consequences and healthcare expenditures. This work presents a complete hybrid methodology that integrates machine learning techniques with medical image analysis to improve the identification of cardiovascular diseases. The approach integrates multiple imaging modalities, including echocardiography, cardiac MRI, and chest radiographs, with patient health records, enhancing diagnostic accuracy beyond standard techniques that depend exclusively on numerical clinical data. During the preprocessing phase, essential visual features are extracted from medical images using image processing methods and convolutional neural networks (CNNs). These are subsequently combined with clinical characteristics and fed into various machine learning classifiers, including Support Vector Machines (SVM), Random Forest (RF), XGBoost, and Deep Neural Networks (DNNs), to differentiate between healthy individuals and patients with cardiovascular disease. The proposed method attained a diagnostic accuracy of up to 96%, exceeding models reliant exclusively on clinical data. This study highlights the potential of integrating artificial intelligence with medical imaging to create a highly accurate and non-invasive diagnostic instrument for cardiovascular disease.
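A sketch of the hybrid pipeline this abstract describes: CNN image embeddings concatenated with tabular clinical features, then fed to a classical classifier. The ResNet backbone, feature dimensions, and toy data are illustrative assumptions; in practice a pretrained backbone and real patient records would be used.

```python
# Hybrid imaging + clinical-data classification: CNN embeddings are fused
# with tabular features before a Random Forest makes the final call.
import numpy as np
import torch
import torchvision.models as models
from sklearn.ensemble import RandomForestClassifier

# Frozen CNN feature extractor (randomly initialized here; a pretrained
# backbone would be loaded in practice). Classification head removed.
backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()
backbone.eval()

def embed(images: torch.Tensor) -> np.ndarray:
    """images: (N, 3, 224, 224) preprocessed scans -> (N, 512) embeddings."""
    with torch.no_grad():
        return backbone(images).numpy()

# Toy data standing in for real scans and clinical records.
images = torch.randn(100, 3, 224, 224)
clinical = np.random.randn(100, 12)          # e.g. age, BP, cholesterol, ...
labels = np.random.randint(0, 2, size=100)   # healthy vs. cardiovascular disease

X = np.concatenate([embed(images), clinical], axis=1)  # fused feature matrix
clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
print(clf.score(X, labels))
```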

Salmanpour MR, Mousavi A, Xu Y, Weeks WB, Hacihaliloglu I

PubMed · Jul 19, 2025
Image-to-image (I2I) translation networks have emerged as promising tools for generating synthetic medical images; however, their clinical reliability and ability to preserve diagnostically relevant features remain underexplored. This study evaluates the performance of state-of-the-art 2D/3D I2I networks for converting ultrasound (US) images to synthetic MRI in prostate cancer (PCa) imaging. The novelty lies in combining radiomics, expert clinical evaluation, and classification performance to comprehensively benchmark these models for potential integration into real-world diagnostic workflows. A dataset of 794 PCa patients was analyzed using ten leading I2I networks to synthesize MRI from US input. Radiomics feature (RF) analysis was performed using Spearman correlation to assess whether high-performing networks (SSIM > 0.85) preserved quantitative imaging biomarkers. A qualitative evaluation by seven experienced physicians assessed the anatomical realism, presence of artifacts, and diagnostic interpretability of synthetic images. Additionally, classification tasks using synthetic images were conducted using two machine learning and one deep learning model to assess the practical diagnostic benefit. Among all networks, 2D-Pix2Pix achieved the highest SSIM (0.855 ± 0.032). RF analysis showed that 76 out of 186 features were preserved post-translation, while the remainder were degraded or lost. Qualitative feedback revealed consistent issues with low-level feature preservation and artifact generation, particularly in lesion-rich regions. These evaluations were conducted to assess whether synthetic MRI retained clinically relevant patterns, supported expert interpretation, and improved diagnostic accuracy. Importantly, classification performance using synthetic MRI significantly exceeded that of US-based input, achieving average accuracy and AUC of ~ 0.93 ± 0.05. Although 2D-Pix2Pix showed the best overall performance in similarity and partial RF preservation, improvements are still required in lesion-level fidelity and artifact suppression. The combination of radiomics, qualitative, and classification analyses offered a holistic view of the current strengths and limitations of I2I models, supporting their potential in clinical applications pending further refinement and validation.
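A sketch of the radiomics-preservation check this abstract reports: for each feature, compute the Spearman correlation between values extracted from the real images and from their synthetic counterparts, and count features whose rank order survives translation. The 0.8 cutoff and the placeholder data are illustrative assumptions; the paper does not specify its threshold here.

```python
# Feature-wise Spearman correlation between real and synthetic radiomics
# features (RFs), counting how many are "preserved" after I2I translation.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_patients, n_features = 794, 186                    # cohort/RF counts from the study
rf_real = rng.normal(size=(n_patients, n_features))  # placeholder real-image RFs
rf_synth = rf_real + rng.normal(scale=0.5, size=rf_real.shape)  # degraded copy

preserved = 0
for j in range(n_features):
    rho, _ = spearmanr(rf_real[:, j], rf_synth[:, j])
    if abs(rho) > 0.8:   # assumed cutoff: rank order largely intact
        preserved += 1
print(f"{preserved}/{n_features} radiomics features preserved")
```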

Yang Z, Jiang H, Shan S, Wang X, Kou Q, Wang C, Jin P, Xu Y, Liu X, Zhang Y, Zhang Y

PubMed · Jul 19, 2025
To develop and validate a deep learning model based on arterial phase-enhanced CT for predicting the pathological grading of clear cell renal cell carcinoma (ccRCC). Data from 564 patients diagnosed with ccRCC from five distinct hospitals were retrospectively analyzed. Patients from centers 1 and 2 were randomly divided into a training set (n=283) and an internal test set (n=122). Patients from centers 3, 4, and 5 served as external validation sets 1 (n=60), 2 (n=38), and 3 (n=61), respectively. A 2D model, a 2.5D model (three-slice input), and a radiomics-based multi-layer perceptron (MLP) model were developed. Model performance was evaluated using the area under the curve (AUC), accuracy, and sensitivity. The 2.5D model outperformed the 2D and MLP models. Its AUCs were 0.959 (95% CI: 0.9438-0.9738) for the training set, 0.879 (95% CI: 0.8401-0.9180) for the internal test set, and 0.870 (95% CI: 0.8076-0.9334), 0.862 (95% CI: 0.7581-0.9658), and 0.849 (95% CI: 0.7766-0.9216) for the three external validation sets, respectively. The corresponding accuracy values were 0.895, 0.836, 0.827, 0.825, and 0.839. Compared to the MLP model, the 2.5D model achieved significantly higher AUCs (increases of 0.150 [p<0.05], 0.112 [p<0.05], and 0.088 [p<0.05]) and accuracies (increases of 0.077 [p<0.05], 0.075 [p<0.05], and 0.101 [p<0.05]) in the external validation sets. The 2.5D model, which takes three adjacent CT slices as input, demonstrated improved predictive performance for the WHO/ISUP grading of ccRCC.
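A minimal sketch of the 2.5D idea: three adjacent CT slices are stacked as input channels so a 2D CNN sees local through-plane context without the cost of a full 3D network. The ResNet backbone and the binary grading head are illustrative assumptions.

```python
# 2.5D classification: neighbor slices become the three input channels
# of an ordinary 2D CNN.
import torch
import torch.nn as nn
import torchvision.models as models

class Model25D(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=None)
        # Two outputs assumed: low-grade vs. high-grade WHO/ISUP.
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 2)

    def forward(self, slices):  # slices: (B, 3, H, W)
        return self.backbone(slices)

volume = torch.randn(1, 64, 224, 224)   # one CT volume with 64 axial slices
i = 32                                  # index of the tumor-bearing slice
stack = volume[:, i - 1 : i + 2]        # slice above, target, slice below
logits = Model25D()(stack)
```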

Joshi T, Virostko J, Petrov MS

PubMed · Jul 19, 2025
High intra-pancreatic fat deposition (IPFD) plays an important role in diseases of the pancreas. The intricate anatomy of the pancreas and the surrounding structures has historically made IPFD quantification a challenging measurement to make accurately on radiological images. To take on the challenge, automated IPFD quantification methods using artificial intelligence (AI) have recently been deployed. The aim was to benchmark the current knowledge on the use of AI-based models to measure IPFD automatically. The search was conducted in the MEDLINE, Embase, Scopus, and IEEE Xplore databases. Studies were eligible if they used AI for both segmentation of the pancreas and quantification of IPFD. The ground truth was manual segmentation by radiologists. When possible, data were pooled statistically using a random-effects model. A total of 12 studies (10 cross-sectional and 2 longitudinal) encompassing more than 50,000 people were included. Eight of the 12 studies used MRI, whereas four employed CT. The U-Net and nnU-Net models were the most frequently used AI-based models. The pooled Dice similarity coefficient of AI-based models in quantifying IPFD was 82.3% (95% confidence interval, 73.5 to 91.1%). The clinical application of AI-based models showed the relevance of high IPFD to acute pancreatitis, pancreatic cancer, and type 2 diabetes mellitus. Current AI-based models for IPFD quantification are suboptimal, as the dissimilarity between AI-based and manual quantification of IPFD is not negligible. Future advancements in fully automated measurement of IPFD will accelerate the accumulation of robust, large-scale evidence on the role of high IPFD in pancreatic diseases. KEY POINTS: Question: What is the current evidence on the performance and clinical applicability of artificial intelligence-based models for automated quantification of intra-pancreatic fat deposition? Findings: The nnU-Net model achieved the highest Dice similarity coefficient among MRI-based studies, whereas the nnTransfer model demonstrated the highest Dice similarity coefficient in CT-based studies. Clinical relevance: Standardisation of reporting on artificial intelligence-based models for the quantification of intra-pancreatic fat deposition will be essential to enhancing the clinical applicability and reliability of artificial intelligence in imaging patients with diseases of the pancreas.
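The Dice similarity coefficient pooled in this review measures overlap between an AI segmentation and the radiologist's manual mask: DSC = 2|A ∩ B| / (|A| + |B|). A short NumPy sketch with toy masks:

```python
# Dice similarity coefficient between a predicted and a manual segmentation.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """pred, truth: boolean pancreas masks on the same image grid."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

pred = np.zeros((128, 128), dtype=bool); pred[40:80, 40:80] = True
truth = np.zeros((128, 128), dtype=bool); truth[50:90, 50:90] = True
print(f"DSC = {dice(pred, truth):.3f}")   # 1.0 would mean perfect overlap
```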

Mehak Arora, Ayman Ali, Kaiyuan Wu, Carolyn Davis, Takashi Shimazui, Mahmoud Alwakeel, Victor Moas, Philip Yang, Annette Esper, Rishikesan Kamaleswaran

arXiv preprint · Jul 19, 2025
In intensive care units (ICUs), patients with complex clinical conditions require vigilant monitoring and prompt interventions. Chest X-rays (CXRs) are a vital diagnostic tool, providing insights into clinical trajectories, but their irregular acquisition limits their utility. Existing tools for CXR interpretation are constrained by cross-sectional analysis, failing to capture temporal dynamics. To address this, we introduce CXR-TFT, a novel multi-modal framework that integrates temporally sparse CXR imaging and radiology reports with high-frequency clinical data, such as vital signs, laboratory values, and respiratory flow sheets, to predict the trajectory of CXR findings in critically ill patients. CXR-TFT leverages latent embeddings from a vision encoder that are temporally aligned with hourly clinical data through interpolation. A transformer model is then trained to predict CXR embeddings at each hour, conditioned on previous embeddings and clinical measurements. In a retrospective study of 20,000 ICU patients, CXR-TFT demonstrated high accuracy in forecasting abnormal CXR findings up to 12 hours before they became radiographically evident. This predictive capability in clinical data holds significant potential for enhancing the management of time-sensitive conditions like acute respiratory distress syndrome, where early intervention is crucial and diagnoses are often delayed. By providing distinctive temporal resolution in prognostic CXR analysis, CXR-TFT offers actionable 'whole patient' insights that can directly improve clinical outcomes.
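A sketch of the temporal-alignment step this abstract describes: sparse CXR embeddings acquired at irregular times are interpolated onto the hourly grid of the clinical data before the transformer sees them. The embedding size, time horizon, and linear interpolation scheme are illustrative assumptions.

```python
# Align irregularly timed CXR embeddings to an hourly clinical-data grid
# by per-dimension linear interpolation.
import numpy as np

emb_dim = 64
cxr_hours = np.array([3.0, 27.5, 51.0])             # irregular acquisition times
cxr_emb = np.random.randn(len(cxr_hours), emb_dim)  # vision-encoder embeddings

hourly_grid = np.arange(0, 72)                      # 72 hours of ICU stay
aligned = np.stack(
    [np.interp(hourly_grid, cxr_hours, cxr_emb[:, d]) for d in range(emb_dim)],
    axis=1,
)  # (72, emb_dim): one embedding per hour, ready to pair with vitals/labs
print(aligned.shape)
```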

Cassandra Tong Ye, Shamus Li, Tyler King, Kristina Monakhova

arXiv preprint · Jul 19, 2025
Deep learning models often hallucinate, producing realistic artifacts that are not truly present in the sample. This can have dire consequences for scientific and medical inverse problems, such as MRI and microscopy denoising, where accuracy is more important than perceptual quality. Uncertainty quantification techniques, such as conformal prediction, can pinpoint outliers and provide guarantees for image regression tasks, improving reliability. However, existing methods utilize a linear constant scaling factor to calibrate uncertainty bounds, resulting in larger, less informative bounds. We propose QUTCC, a quantile uncertainty training and calibration technique that enables nonlinear, non-uniform scaling of quantile predictions to enable tighter uncertainty estimates. Using a U-Net architecture with a quantile embedding, QUTCC enables the prediction of the full conditional distribution of quantiles for the imaging task. During calibration, QUTCC generates uncertainty bounds by iteratively querying the network for upper and lower quantiles, progressively refining the bounds to obtain a tighter interval that captures the desired coverage. We evaluate our method on several denoising tasks as well as compressive MRI reconstruction. Our method successfully pinpoints hallucinations in image estimates and consistently achieves tighter uncertainty intervals than prior methods while maintaining the same statistical coverage.
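Quantile regression underlies uncertainty bounds of this kind: a network trained with the pinball loss learns conditional quantiles, and an upper/lower quantile pair forms an uncertainty interval. A PyTorch sketch of the standard pinball loss (QUTCC's network, quantile embedding, and calibration loop are omitted):

```python
# Pinball (quantile) loss: penalizes under-prediction by q and
# over-prediction by (1 - q), so the minimizer is the q-th quantile.
import torch

def pinball_loss(pred: torch.Tensor, target: torch.Tensor, q: float) -> torch.Tensor:
    err = target - pred
    return torch.mean(torch.maximum(q * err, (q - 1) * err))

pred, target = torch.randn(16, 1, 64, 64), torch.randn(16, 1, 64, 64)
lo = pinball_loss(pred, target, 0.05)   # lower bound of a 90% interval
hi = pinball_loss(pred, target, 0.95)   # upper bound of a 90% interval
```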

Merjem Bećirović, Amina Kurtović, Nordin Smajlović, Medina Kapo, Amila Akagić

arXiv preprint · Jul 19, 2025
Medical imaging plays a vital role in early disease diagnosis and monitoring. Specifically, blood microscopy offers valuable insights into blood cell morphology and the detection of hematological disorders. In recent years, deep learning-based automated classification systems have demonstrated high potential in enhancing the accuracy and efficiency of blood image analysis. However, a detailed performance analysis of specific deep learning frameworks appears to be lacking. This paper compares the performance of three popular deep learning frameworks, TensorFlow with Keras, PyTorch, and JAX, in classifying blood cell images from the publicly available BloodMNIST dataset. The study primarily focuses on differences in inference time, but also examines classification performance across different image sizes. The results reveal variations in performance across frameworks, influenced by factors such as image resolution and framework-specific optimizations. Classification accuracy for JAX and PyTorch was comparable to current benchmarks, showcasing the efficiency of these frameworks for medical image classification.
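A sketch of the kind of inference-time measurement such a comparison rests on, shown here for PyTorch only; the analogous timing loop would be repeated in TensorFlow/Keras and JAX. The model choice, batch size, and repetition counts are illustrative assumptions (BloodMNIST images are 28x28 RGB).

```python
# Wall-clock inference timing with warm-up iterations excluded.
import time
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()
batch = torch.randn(32, 3, 28, 28)    # BloodMNIST-sized inputs

with torch.no_grad():
    for _ in range(5):                # warm-up: exclude one-time setup costs
        model(batch)
    start = time.perf_counter()
    for _ in range(50):
        model(batch)
    elapsed = time.perf_counter() - start
print(f"{1000 * elapsed / 50:.2f} ms per batch")
```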

Andrea Moschetto, Lemuel Puglisi, Alec Sargood, Pierluigi Dell'Acqua, Francesco Guarnera, Sebastiano Battiato, Daniele Ravì

arXiv preprint · Jul 19, 2025
Magnetic Resonance Imaging (MRI) enables the acquisition of multiple image contrasts, such as T1-weighted (T1w) and T2-weighted (T2w) scans, each offering distinct diagnostic insights. However, acquiring all desired modalities increases scan time and cost, motivating research into computational methods for cross-modal synthesis. To address this, recent approaches aim to synthesize missing MRI contrasts from those already acquired, reducing acquisition time while preserving diagnostic quality. Image-to-image (I2I) translation provides a promising framework for this task. In this paper, we present a comprehensive benchmark of generative models for T1w-to-T2w 2D MRI I2I translation, specifically Generative Adversarial Networks (GANs), diffusion models, and flow matching (FM) techniques. All frameworks are implemented with comparable settings and evaluated on three publicly available MRI datasets of healthy adults. Our quantitative and qualitative analyses show that the GAN-based Pix2Pix model outperforms diffusion and FM-based methods in terms of structural fidelity, image quality, and computational efficiency. Consistent with existing literature, these results suggest that flow-based models are prone to overfitting on small datasets and simpler tasks, and may require more data to match or surpass GAN performance. These findings offer practical guidance for deploying I2I translation techniques in real-world MRI workflows and highlight promising directions for future research in cross-modal medical image synthesis. Code and models are publicly available at https://github.com/AndreaMoschetto/medical-I2I-benchmark.
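A sketch of the Pix2Pix-style generator objective behind the benchmark's best performer: an adversarial term plus an L1 term that keeps the synthetic T2w close to the acquired one. The lambda = 100 weighting follows the original Pix2Pix paper; the generator, discriminator, and training loop are assumed and omitted.

```python
# Pix2Pix-style conditional-GAN generator loss: adversarial + weighted L1.
import torch
import torch.nn.functional as F

def generator_loss(disc_fake_logits: torch.Tensor,
                   fake_t2: torch.Tensor,
                   real_t2: torch.Tensor,
                   lam: float = 100.0) -> torch.Tensor:
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))  # fool the critic
    recon = F.l1_loss(fake_t2, real_t2)        # stay close to the real contrast
    return adv + lam * recon

loss = generator_loss(torch.randn(4, 1, 30, 30),   # PatchGAN-style logits
                      torch.rand(4, 1, 256, 256),  # synthetic T2w
                      torch.rand(4, 1, 256, 256))  # acquired T2w
```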

Ren X, Li L

PubMed · Jul 19, 2025
Magnetic resonance imaging (MRI) has become a pivotal non-invasive tool in the evaluation and management of lymphedema. This review systematically summarizes its current applications, highlighting imaging techniques, comparative advantages over other modalities, MRI-based staging systems, and emerging clinical roles. A comprehensive literature review was conducted, covering comparisons with lymphoscintigraphy, ultrasound, and computed tomography (CT), as well as studies on the feasibility of multiparametric MRI sequences. Compared to conventional imaging, MRI offers superior soft tissue contrast and enables detailed assessment of lymphatic anatomy, tissue composition, and fluid distribution through sequences such as T2-weighted imaging, diffusion-weighted imaging (DWI), and magnetic resonance lymphangiography (MRL). Standardized grading systems have been proposed to support clinical staging. MRI is increasingly applied in preoperative planning and postoperative surveillance. These findings underscore MRI's diagnostic precision and clinical utility. Future research should focus on protocol standardization, incorporation of quantitative biomarkers, and development of AI-driven tools to enable personalized, scalable lymphedema care.