Page 91 of 3993982 results

Artificial intelligence-based models for quantification of intra-pancreatic fat deposition and their clinical relevance: a systematic review of imaging studies.

Joshi T, Virostko J, Petrov MS

pubmed · Jul 19 2025
High intra-pancreatic fat deposition (IPFD) plays an important role in diseases of the pancreas. The intricate anatomy of the pancreas and the surrounding structures has historically made IPFD difficult to quantify accurately on radiological images. To take on this challenge, automated IPFD quantification methods using artificial intelligence (AI) have recently been deployed. The aim was to benchmark the current knowledge on the use of AI-based models to quantify IPFD automatically. The search was conducted in the MEDLINE, Embase, Scopus, and IEEE Xplore databases. Studies were eligible if they used AI for both segmentation of the pancreas and quantification of IPFD. The ground truth was manual segmentation by radiologists. Where possible, data were pooled statistically using a random-effects model. A total of 12 studies (10 cross-sectional and 2 longitudinal) encompassing more than 50,000 people were included. Eight of the 12 studies used MRI, whereas four studies employed CT. The U-Net and nnU-Net models were the most frequently used AI-based models. The pooled Dice similarity coefficient of AI-based models in quantifying IPFD was 82.3% (95% confidence interval, 73.5 to 91.1%). The clinical application of AI-based models showed the relevance of high IPFD to acute pancreatitis, pancreatic cancer, and type 2 diabetes mellitus. Current AI-based models for IPFD quantification are suboptimal, as the dissimilarity between AI-based and manual quantification of IPFD is not negligible. Future advancements in fully automated measurement of IPFD will accelerate the accumulation of robust, large-scale evidence on the role of high IPFD in pancreatic diseases.
KEY POINTS:
Question: What is the current evidence on the performance and clinical applicability of artificial intelligence-based models for automated quantification of intra-pancreatic fat deposition?
Findings: The nnU-Net model achieved the highest Dice similarity coefficient among MRI-based studies, whereas the nnTransfer model demonstrated the highest Dice similarity coefficient in CT-based studies.
Clinical relevance: Standardisation of reporting on artificial intelligence-based models for the quantification of intra-pancreatic fat deposition will be essential to enhancing the clinical applicability and reliability of artificial intelligence in imaging patients with diseases of the pancreas.
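The pooled metric above, the Dice similarity coefficient, measures the overlap between an AI segmentation and the radiologist's manual ground truth. A minimal sketch on toy binary masks (illustrative values, not study data):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy example: AI mask vs. manual mask on a 4x4 grid
ai_mask = np.array([[1, 1, 0, 0],
                    [1, 1, 0, 0],
                    [0, 0, 0, 0],
                    [0, 0, 0, 0]])
manual_mask = np.array([[1, 1, 0, 0],
                        [1, 0, 0, 0],
                        [0, 0, 0, 0],
                        [0, 0, 0, 0]])
print(round(dice_coefficient(ai_mask, manual_mask), 3))  # → 0.857
```

A Dice of 1.0 means perfect overlap; the review's pooled value of 82.3% is why the authors call current AI-manual agreement "not negligible" in its dissimilarity.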

CXR-TFT: Multi-Modal Temporal Fusion Transformer for Predicting Chest X-ray Trajectories

Mehak Arora, Ayman Ali, Kaiyuan Wu, Carolyn Davis, Takashi Shimazui, Mahmoud Alwakeel, Victor Moas, Philip Yang, Annette Esper, Rishikesan Kamaleswaran

arxiv preprint · Jul 19 2025
In intensive care units (ICUs), patients with complex clinical conditions require vigilant monitoring and prompt interventions. Chest X-rays (CXRs) are a vital diagnostic tool, providing insights into clinical trajectories, but their irregular acquisition limits their utility. Existing tools for CXR interpretation are constrained by cross-sectional analysis, failing to capture temporal dynamics. To address this, we introduce CXR-TFT, a novel multi-modal framework that integrates temporally sparse CXR imaging and radiology reports with high-frequency clinical data, such as vital signs, laboratory values, and respiratory flow sheets, to predict the trajectory of CXR findings in critically ill patients. CXR-TFT leverages latent embeddings from a vision encoder that are temporally aligned with hourly clinical data through interpolation. A transformer model is then trained to predict CXR embeddings at each hour, conditioned on previous embeddings and clinical measurements. In a retrospective study of 20,000 ICU patients, CXR-TFT demonstrated high accuracy in forecasting abnormal CXR findings up to 12 hours before they became radiographically evident. This predictive capability holds significant potential for enhancing the management of time-sensitive conditions like acute respiratory distress syndrome, where early intervention is crucial and diagnoses are often delayed. By providing distinctive temporal resolution in prognostic CXR analysis, CXR-TFT offers actionable 'whole patient' insights that can directly improve clinical outcomes.
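The temporal-alignment step described above, interpolating sparse CXR embeddings onto the hourly clinical-data grid, can be sketched as follows. The acquisition times and 2-dimensional embeddings here are illustrative stand-ins, not CXR-TFT's actual encoder outputs:

```python
import numpy as np

# Hypothetical data: CXR embeddings acquired at irregular hours
cxr_hours = np.array([0.0, 9.0, 30.0])      # acquisition times (h)
cxr_embeddings = np.array([[0.1, 0.5],
                           [0.3, 0.9],
                           [0.7, 0.1]])      # one 2-d embedding per CXR

hourly_grid = np.arange(0, 31)               # hourly grid matching vitals/labs

# Linearly interpolate each embedding dimension onto the hourly grid,
# so every hour carries an (approximate) CXR state aligned with the
# high-frequency clinical measurements.
aligned = np.stack(
    [np.interp(hourly_grid, cxr_hours, cxr_embeddings[:, d])
     for d in range(cxr_embeddings.shape[1])],
    axis=1,
)
print(aligned.shape)   # (31, 2): one embedding per hour
print(aligned[9])      # exactly the embedding acquired at hour 9
```

After alignment, each hourly row can be concatenated with that hour's clinical features before being fed to the transformer.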

QUTCC: Quantile Uncertainty Training and Conformal Calibration for Imaging Inverse Problems

Cassandra Tong Ye, Shamus Li, Tyler King, Kristina Monakhova

arxiv preprint · Jul 19 2025
Deep learning models often hallucinate, producing realistic artifacts that are not truly present in the sample. This can have dire consequences for scientific and medical inverse problems, such as MRI and microscopy denoising, where accuracy is more important than perceptual quality. Uncertainty quantification techniques, such as conformal prediction, can pinpoint outliers and provide guarantees for image regression tasks, improving reliability. However, existing methods utilize a linear constant scaling factor to calibrate uncertainty bounds, resulting in larger, less informative bounds. We propose QUTCC, a quantile uncertainty training and calibration technique that enables nonlinear, non-uniform scaling of quantile predictions to enable tighter uncertainty estimates. Using a U-Net architecture with a quantile embedding, QUTCC enables the prediction of the full conditional distribution of quantiles for the imaging task. During calibration, QUTCC generates uncertainty bounds by iteratively querying the network for upper and lower quantiles, progressively refining the bounds to obtain a tighter interval that captures the desired coverage. We evaluate our method on several denoising tasks as well as compressive MRI reconstruction. Our method successfully pinpoints hallucinations in image estimates and consistently achieves tighter uncertainty intervals than prior methods while maintaining the same statistical coverage.

Performance comparison of medical image classification systems using TensorFlow Keras, PyTorch, and JAX

Merjem Bećirović, Amina Kurtović, Nordin Smajlović, Medina Kapo, Amila Akagić

arxiv preprint · Jul 19 2025
Medical imaging plays a vital role in early disease diagnosis and monitoring. Specifically, blood microscopy offers valuable insights into blood cell morphology and the detection of hematological disorders. In recent years, deep learning-based automated classification systems have demonstrated high potential in enhancing the accuracy and efficiency of blood image analysis. However, a detailed performance analysis of specific deep learning frameworks appears to be lacking. This paper compares the performance of three popular deep learning frameworks, TensorFlow with Keras, PyTorch, and JAX, in classifying blood cell images from the publicly available BloodMNIST dataset. The study focuses primarily on inference-time differences, but also reports classification performance at different image sizes. The results reveal variations in performance across frameworks, influenced by factors such as image resolution and framework-specific optimizations. Classification accuracy for JAX and PyTorch was comparable to current benchmarks, showcasing the efficiency of these frameworks for medical image classification.
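Inference-time comparisons like this hinge on a careful timing harness: warm-up runs to absorb JIT compilation and lazy initialization, then a robust statistic over repeated runs. A framework-agnostic sketch, using a NumPy matrix multiply as a stand-in for a real model (the batch shape and "layer" are purely illustrative):

```python
import time
import numpy as np

def benchmark(fn, x, n_warmup=3, n_runs=20):
    """Median wall-clock time of fn(x) over n_runs, after warm-up."""
    for _ in range(n_warmup):          # warm-up: JIT/caches/lazy init
        fn(x)
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        fn(x)
        times.append(time.perf_counter() - t0)
    return float(np.median(times))     # median is robust to OS jitter

# Stand-in "model": a single dense layer as a NumPy matmul.
w = np.random.default_rng(0).normal(size=(28 * 28, 8)).astype(np.float32)
model = lambda x: x @ w

batch = np.ones((32, 28 * 28), dtype=np.float32)
t = benchmark(model, batch)
print(t > 0)   # True
```

The same harness would wrap a TensorFlow, PyTorch, or JAX forward pass; for JAX in particular, warm-up is essential because the first call triggers XLA compilation.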

Benchmarking GANs, Diffusion Models, and Flow Matching for T1w-to-T2w MRI Translation

Andrea Moschetto, Lemuel Puglisi, Alec Sargood, Pierluigi Dell'Acqua, Francesco Guarnera, Sebastiano Battiato, Daniele Ravì

arxiv preprint · Jul 19 2025
Magnetic Resonance Imaging (MRI) enables the acquisition of multiple image contrasts, such as T1-weighted (T1w) and T2-weighted (T2w) scans, each offering distinct diagnostic insights. However, acquiring all desired modalities increases scan time and cost, motivating research into computational methods for cross-modal synthesis. To address this, recent approaches aim to synthesize missing MRI contrasts from those already acquired, reducing acquisition time while preserving diagnostic quality. Image-to-image (I2I) translation provides a promising framework for this task. In this paper, we present a comprehensive benchmark of generative models, specifically Generative Adversarial Networks (GANs), diffusion models, and flow matching (FM) techniques, for T1w-to-T2w 2D MRI I2I translation. All frameworks are implemented with comparable settings and evaluated on three publicly available MRI datasets of healthy adults. Our quantitative and qualitative analyses show that the GAN-based Pix2Pix model outperforms diffusion and FM-based methods in terms of structural fidelity, image quality, and computational efficiency. Consistent with existing literature, these results suggest that flow-based models are prone to overfitting on small datasets and simpler tasks, and may require more data to match or surpass GAN performance. These findings offer practical guidance for deploying I2I translation techniques in real-world MRI workflows and highlight promising directions for future research in cross-modal medical image synthesis. Code and models are publicly available at https://github.com/AndreaMoschetto/medical-I2I-benchmark.

Magnetic resonance imaging in lymphedema: Opportunities, challenges, and future perspectives.

Ren X, Li L

pubmed · Jul 19 2025
Magnetic resonance imaging (MRI) has become a pivotal non-invasive tool in the evaluation and management of lymphedema. This review systematically summarizes its current applications, highlighting imaging techniques, comparative advantages over other modalities, MRI-based staging systems, and emerging clinical roles. A comprehensive literature review was conducted, covering comparisons with lymphoscintigraphy, ultrasound, and computed tomography (CT), as well as studies on the feasibility of multiparametric MRI sequences. Compared to conventional imaging, MRI offers superior soft tissue contrast and enables detailed assessment of lymphatic anatomy, tissue composition, and fluid distribution through sequences such as T2-weighted imaging, diffusion-weighted imaging (DWI), and magnetic resonance lymphangiography (MRL). Standardized grading systems have been proposed to support clinical staging. MRI is increasingly applied in preoperative planning and postoperative surveillance. These findings underscore MRI's diagnostic precision and clinical utility. Future research should focus on protocol standardization, incorporation of quantitative biomarkers, and development of AI-driven tools to enable personalized, scalable lymphedema care.

Explainable CT-based deep learning model for predicting hematoma expansion including intraventricular hemorrhage growth.

Zhao X, Zhang Z, Shui J, Xu H, Yang Y, Zhu L, Chen L, Chang S, Du C, Yao Z, Fang X, Shi L

pubmed · Jul 18 2025
Hematoma expansion (HE), including intraventricular hemorrhage (IVH) growth, significantly affects outcomes in patients with intracerebral hemorrhage (ICH). This study aimed to develop, validate, and interpret a deep learning model, HENet, for predicting three definitions of HE. Using CT scans and clinical data from 718 ICH patients across three hospitals, the multicenter retrospective study focused on revised hematoma expansion (RHE) definitions 1 and 2, and conventional HE (CHE). HENet's performance was compared with 2D models and physician predictions using two external validation sets. Results showed that HENet achieved high AUC values for RHE1, RHE2, and CHE predictions, surpassing physicians' predictions and 2D models in net reclassification index and integrated discrimination index for RHE1 and RHE2 outcomes. The Grad-CAM technique provided visual insights into the model's decision-making process. These findings suggest that integrating HENet into clinical practice could improve prediction accuracy and patient outcomes in ICH cases.
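The Grad-CAM technique mentioned above weights each convolutional feature map by its globally averaged gradient and keeps the positive part of the weighted sum, producing a heatmap of the regions driving the prediction. A minimal sketch with synthetic activations and gradients (not HENet's actual features):

```python
import numpy as np

# Hypothetical inputs: feature maps A (K, H, W) from the last conv layer
# and the gradients of the target score w.r.t. those maps.
rng = np.random.default_rng(1)
A = rng.random((4, 8, 8))            # K=4 channels of 8x8 activations
grads = rng.normal(size=(4, 8, 8))   # d(score)/dA, same shape

# Grad-CAM: channel weights are the globally averaged gradients;
# the class-activation map is the ReLU of the weighted sum of maps.
alpha = grads.mean(axis=(1, 2))                     # (K,) channel weights
cam = np.maximum((alpha[:, None, None] * A).sum(axis=0), 0.0)
cam /= cam.max() + 1e-8                             # normalize to [0, 1]
print(cam.shape)   # (8, 8) heatmap, upsampled to image size in practice
```

In a deployed model the gradients come from backpropagating the predicted HE score, and the 8x8 map is upsampled and overlaid on the CT slice to show which regions the model attended to.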

Performance of Machine Learning in Diagnosing KRAS (Kirsten Rat Sarcoma) Mutations in Colorectal Cancer: Systematic Review and Meta-Analysis.

Chen K, Qu Y, Han Y, Li Y, Gao H, Zheng D

pubmed · Jul 18 2025
With the widespread application of machine learning (ML) in the diagnosis and treatment of colorectal cancer (CRC), some studies have investigated the use of ML techniques for the diagnosis of KRAS (Kirsten rat sarcoma) mutation. Nevertheless, there is scarce evidence from evidence-based medicine to substantiate its efficacy. Our study was carried out to systematically review the performance of ML models, developed using different modeling approaches, in diagnosing KRAS mutations in CRC. We aim to offer evidence-based foundations for the development and enhancement of future intelligent diagnostic tools. PubMed, Cochrane Library, Embase, and Web of Science were systematically searched, with the search cutoff date set to December 22, 2024. The included studies were publicly published research papers that used ML to diagnose KRAS gene mutations in CRC. The risk of bias in the included models was evaluated via the PROBAST (Prediction Model Risk of Bias Assessment Tool). A meta-analysis of the models' concordance index (c-index) was performed, and a bivariate mixed-effects model was used to summarize sensitivity and specificity based on diagnostic contingency tables. A total of 43 studies involving 10,888 patients were included. The modeling variables were derived from clinical characteristics, computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography/computed tomography, and pathological histology. In the validation cohort, for the ML model developed based on CT radiomic features, the c-index, sensitivity, and specificity were 0.87 (95% CI 0.84-0.90), 0.85 (95% CI 0.80-0.89), and 0.83 (95% CI 0.73-0.89), respectively. For the model developed using MRI radiomic features, the c-index, sensitivity, and specificity were 0.77 (95% CI 0.71-0.83), 0.78 (95% CI 0.72-0.83), and 0.73 (95% CI 0.63-0.81), respectively.
For the ML model developed based on positron emission tomography/computed tomography radiomic features, the c-index, sensitivity, and specificity were 0.84 (95% CI 0.77-0.90), 0.73, and 0.83, respectively. Notably, the deep learning (DL) model based on pathological images demonstrated a c-index, sensitivity, and specificity of 0.96 (95% CI 0.94-0.98), 0.83 (95% CI 0.72-0.91), and 0.87 (95% CI 0.77-0.92), respectively. The MRI-based DL model showed a c-index of 0.93 (95% CI 0.90-0.96), sensitivity of 0.85 (95% CI 0.75-0.91), and specificity of 0.83 (95% CI 0.77-0.88). ML is highly accurate in diagnosing KRAS mutations in CRC, and DL models based on MRI and pathological images exhibit particularly strong diagnostic accuracy. More broadly applicable DL-based diagnostic tools may be developed in the future. However, the clinical application of DL models remains relatively limited at present. Therefore, future research should focus on increasing sample sizes, improving model architectures, and developing more advanced DL models to facilitate the creation of highly efficient intelligent diagnostic tools for KRAS mutation diagnosis in CRC.
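The per-study sensitivity and specificity pooled above are computed from 2x2 diagnostic contingency tables (mutant vs. wild-type, predicted vs. true). A minimal sketch; the counts below are a hypothetical table, chosen only to illustrate the arithmetic:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity and specificity from a 2x2 contingency table.

    tp/fn: KRAS-mutant cases correctly / incorrectly called;
    tn/fp: wild-type cases correctly / incorrectly called.
    """
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return sensitivity, specificity

# Hypothetical single-study table (illustrative counts)
sens, spec = diagnostic_metrics(tp=85, fp=17, fn=15, tn=83)
print(round(sens, 2), round(spec, 2))   # → 0.85 0.83
```

The meta-analysis then pools such per-study pairs with a bivariate mixed-effects model, which accounts for the correlation between sensitivity and specificity across studies.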

SegMamba-V2: Long-range Sequential Modeling Mamba For General 3D Medical Image Segmentation.

Xing Z, Ye T, Yang Y, Cai D, Gai B, Wu XJ, Gao F, Zhu L

pubmed · Jul 18 2025
The Transformer architecture has demonstrated remarkable results in 3D medical image segmentation due to its capability of modeling global relationships. However, it poses a significant computational burden when processing high-dimensional medical images. Mamba, as a State Space Model (SSM), has recently emerged as a notable approach for modeling long-range dependencies in sequential data. Although a substantial amount of Mamba-based research has focused on natural language and 2D image processing, few studies explore the capability of Mamba on 3D medical images. In this paper, we propose SegMamba-V2, a novel 3D medical image segmentation model, to effectively capture long-range dependencies within whole-volume features at each scale. To achieve this goal, we first devise a hierarchical scale downsampling strategy to enhance the receptive field and mitigate information loss during downsampling. Furthermore, we design a novel tri-orientated spatial Mamba block that extends the global dependency modeling process from one plane to three orthogonal planes to improve feature representation capability. Moreover, we collect and annotate a large-scale dataset (named CRC-2000) with fine-grained categories to facilitate benchmarking evaluation in 3D colorectal cancer (CRC) segmentation. We evaluate the effectiveness of SegMamba-V2 on CRC-2000 and three other large-scale 3D medical image segmentation datasets, covering various modalities, organs, and segmentation targets. Experimental results demonstrate that SegMamba-V2 outperforms state-of-the-art methods by a significant margin, which indicates the universality and effectiveness of the proposed model on 3D medical image segmentation tasks. The code for SegMamba-V2 is publicly available at: https://github.com/ge-xing/SegMamba-V2.
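At the core of Mamba-style models is a linear state-space recurrence scanned over the input sequence, which is what lets them model long-range dependencies in linear time. A minimal sketch of that recurrence (non-selective, with illustrative matrices, not SegMamba-V2's learned parameters):

```python
import numpy as np

def ssm_scan(A, B, C, x):
    """Discrete linear SSM: h_t = A h_{t-1} + B x_t, y_t = C h_t.

    A: (N, N) state transition, B: (N,) input map, C: (N,) output map,
    x: (T,) scalar input sequence -> returns (T,) outputs.
    """
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:          # sequential scan over the flattened sequence
        h = A @ h + B * x_t
        ys.append(C @ h)
    return np.array(ys)

# Illustrative 2-state system on a length-4 input
A = np.array([[0.9, 0.0],
              [0.0, 0.5]])   # diagonal decay, as in many SSM variants
B = np.array([1.0, 1.0])
C = np.array([1.0, -1.0])
y = ssm_scan(A, B, C, np.ones(4))
print(y.shape)   # (4,)
```

Mamba makes A, B, and C input-dependent ("selective") and uses a parallel scan; a tri-orientated block such as SegMamba-V2's runs scans of this kind along three orthogonal orderings of the flattened 3D volume.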

AI Prognostication in Nonsmall Cell Lung Cancer: A Systematic Review.

Augustin M, Lyons K, Kim H, Kim DG, Kim Y

pubmed · Jul 18 2025
The systematic literature review was performed on the use of artificial intelligence (AI) algorithms in nonsmall cell lung cancer (NSCLC) prognostication. Studies were evaluated for the type of input data (histology and whether CT, PET, or MRI was used), cancer therapy intervention, prognostic performance, and comparisons to clinical prognosis systems such as TNM staging. Further comparisons were drawn between different types of AI, such as machine learning (ML) and deep learning (DL). Syntheses of therapeutic interventions and algorithm input modalities were performed for comparison purposes. The review adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). The initial database search identified 3880 results, which were reduced to 513 after automatic screening and to 309 after applying the exclusion criteria. The prognostic performance of AI for NSCLC has been investigated using histology and genetic data, and CT, PET, and MR imaging, for surgery, immunotherapy, and radiation therapy patients with and without chemotherapy. Studies per therapy intervention numbered 13 for immunotherapy, 10 for radiotherapy, 14 for surgery, and 34 for other, multiple, or no specific therapy. The results of this systematic review demonstrate that AI-based prognostication methods consistently achieve higher prognostic performance for NSCLC, especially when directly compared with traditional prognostication techniques such as TNM staging. DL-based approaches outperform ML-based prognostication techniques. DL-based prognostication demonstrates potential for personalized precision cancer therapy as a supplementary decision-making tool. Before it is fully utilized in clinical practice, it is recommended that it be thoroughly validated through well-designed clinical trials.