
Unpaired T1-weighted MRI synthesis from T2-weighted data using unsupervised learning.

Zhao J, Zeng N, Zhao L, Li N

PubMed · Jul 27, 2025
Magnetic Resonance Imaging (MRI) is indispensable for modern diagnostics because it provides detailed anatomical and functional information without ionizing radiation. However, acquiring multiple imaging sequences, such as T1-weighted (T1w) and T2-weighted (T2w) scans, can prolong scan times, increase patient discomfort, and raise healthcare costs. In this study, we propose an unsupervised framework based on a contrast-sensitive domain translation network with adaptive feature normalization to translate unpaired T2w MRI images into clinically acceptable T1w images. Our method employs adversarial training along with cycle-consistency, identity, and attention-guided loss functions. These components ensure that the generated images not only preserve essential anatomical details but also exhibit high visual fidelity compared to ground-truth T1w images. Quantitative evaluation on a publicly available MRI dataset yielded a mean Peak Signal-to-Noise Ratio (PSNR) of 22.403 dB, a mean Structural Similarity Index (SSIM) of 0.775, a Root Mean Squared Error (RMSE) of 0.078, and a Mean Absolute Error (MAE) of 0.036. Additional analysis of pixel intensity and grayscale distributions further supported the consistency between the generated and ground-truth images. Qualitative assessment included visual comparison to assess perceptual fidelity. These promising results suggest that a contrast-sensitive domain translation network with an adaptive feature normalization framework can effectively generate realistic T1w images from T2w inputs, potentially reducing the need to acquire multiple sequences and thereby streamlining MRI protocols.
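As a rough illustration of the loss design this abstract describes, a CycleGAN-style generator objective combining adversarial, cycle-consistency, and identity terms might look like the following PyTorch sketch. The module names, loss weights, and the least-squares adversarial form are assumptions for illustration, not the paper's exact implementation, and the attention-guided term is omitted.

```python
import torch
import torch.nn.functional as F

def generator_loss(G_t2_to_t1, G_t1_to_t2, D_t1, real_t2, real_t1,
                   lambda_cyc=10.0, lambda_id=5.0):
    """Combined generator objective: adversarial + cycle + identity terms."""
    fake_t1 = G_t2_to_t1(real_t2)

    # Adversarial term (least-squares GAN): push D_t1 to score fakes as real.
    d_out = D_t1(fake_t1)
    adv = F.mse_loss(d_out, torch.ones_like(d_out))

    # Cycle consistency: T2 -> T1 -> T2 should reconstruct the input.
    cyc = F.l1_loss(G_t1_to_t2(fake_t1), real_t2)

    # Identity: a real T1 image passed through G_t2_to_t1 should change little.
    idt = F.l1_loss(G_t2_to_t1(real_t1), real_t1)

    return adv + lambda_cyc * cyc + lambda_id * idt
```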

Quantification of hepatic steatosis on post-contrast computed tomography scans using artificial intelligence tools.

Derstine BA, Holcombe SA, Chen VL, Pai MP, Sullivan JA, Wang SC, Su GL

PubMed · Jul 26, 2025
Early detection of steatotic liver disease (SLD) is critically important. In clinical practice, hepatic steatosis is frequently diagnosed using computed tomography (CT) performed for unrelated clinical indications. An equation for estimating magnetic resonance proton density fat fraction (MR-PDFF) from liver attenuation on non-contrast CT exists, but no equivalent equation exists for post-contrast CT. We sought to (1) determine whether an automated workflow can accurately measure liver attenuation, (2) validate previously identified optimal thresholds for liver or liver-spleen attenuation in post-contrast studies, and (3) develop a method for estimating MR-PDFF (FF) on post-contrast CT. The fully automated TotalSegmentator 'total' machine learning model was used to segment the 3D liver and spleen from non-contrast and post-contrast CT scans. Mean attenuation was extracted from liver (L) and spleen (S) volumes and from manually placed regions of interest (ROIs) in multi-phase CT scans of two cohorts: derivation (n = 1740) and external validation (n = 1044). Non-linear regression was used to determine the optimal coefficients for three phase-specific (arterial, venous, delayed) increasing exponential decay equations relating post-contrast L to non-contrast L. MR-PDFF was estimated from non-contrast CT and used as the reference standard. The mean attenuations for manual ROIs versus automated volumes were nearly perfectly correlated for both liver and spleen (r > .96, p < .001). For moderate-to-severe steatosis (L < 40 HU), liver attenuation (L) alone was a better classifier than either the liver-spleen difference (L-S) or ratio (L/S) on post-contrast CTs. Fat fraction calculated from corrected post-contrast liver attenuation agreed with non-contrast FF > 15% in both the derivation and external validation cohorts, with AUROC between 0.92 and 0.97 on arterial, venous, and delayed phases. Automated volumetric mean attenuation of liver and spleen can be used instead of manually placed ROIs for liver fat assessment. Liver attenuation alone in post-contrast phases can be used to assess the presence of moderate-to-severe hepatic steatosis. Correction equations for liver attenuation on post-contrast CT scans enable reasonable quantification of liver steatosis, providing potential opportunities to use clinical scans for large-scale screening or studies in SLD.
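To make the correction step concrete, the sketch below fits an "increasing exponential decay" curve relating post-contrast to non-contrast liver attenuation with non-linear regression, as the abstract describes. The functional form, coefficient names, and the data values are illustrative placeholders, not the published equation or cohort data.

```python
import numpy as np
from scipy.optimize import curve_fit

def increasing_exp_decay(l_post, a, b, c):
    # Rises toward the asymptote a as post-contrast attenuation increases.
    return a - b * np.exp(-c * l_post)

# Placeholder paired mean liver attenuations (HU), one pair per scan;
# real coefficients would be fit per contrast phase on the derivation cohort.
l_post = np.array([60.0, 75.0, 90.0, 105.0, 120.0])
l_noncontrast = np.array([35.0, 45.0, 52.0, 56.0, 58.0])

coeffs, _ = curve_fit(increasing_exp_decay, l_post, l_noncontrast,
                      p0=(60.0, 50.0, 0.02))
corrected = increasing_exp_decay(l_post, *coeffs)  # estimated non-contrast HU
```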

AI-driven preclinical disease risk assessment using imaging in the UK Biobank.

Seletkov D, Starck S, Mueller TT, Zhang Y, Steinhelfer L, Rueckert D, Braren R

PubMed · Jul 26, 2025
Identifying disease risk and detecting disease before clinical symptoms appear are essential for early intervention and improving patient outcomes. In this context, the integration of medical imaging into the clinical workflow offers a unique advantage by capturing detailed structural and functional information. Unlike non-image data such as lifestyle, sociodemographic factors, or prior medical conditions, which often rely on self-reported information susceptible to recall bias and subjective perception, imaging offers more objective and reliable insights. Although the use of medical imaging in artificial intelligence (AI)-driven risk assessment is growing, its full potential remains underutilized. In this work, we demonstrate how imaging can be integrated into routine screening workflows, in particular by taking advantage of the neck-to-knee whole-body magnetic resonance imaging (MRI) data available in the large prospective UK Biobank study. Our analysis focuses on three-year risk assessment for a broad spectrum of diseases, including cardiovascular, digestive, metabolic, inflammatory, degenerative, and oncologic conditions. We evaluate AI-based pipelines for processing whole-body MRI and demonstrate that using image-derived radiomics features provides the best prediction performance, interpretability, and capability for integration with non-image data.
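The downstream step of such a pipeline, fitting a risk classifier on image-derived radiomics features, can be sketched as follows. Feature extraction from segmented organs is assumed to have already produced the matrix X; the arrays here are random placeholders, not study data, and the logistic-regression choice is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))    # placeholder radiomics feature matrix
y = rng.integers(0, 2, size=500)  # placeholder 3-year incidence labels

# Interpretable linear model over radiomics features, scored by AUROC.
risk_model = LogisticRegression(max_iter=1000)
auc = cross_val_score(risk_model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUROC:", auc.mean())
```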

Contextual structured annotations on PACS: a futuristic vision for reporting routine oncologic imaging studies and its potential to transform clinical work and research.

Wong VK, Wang MX, Bethi E, Nagarakanti S, Morani AC, Marcal LP, Rauch GM, Brown JJ, Yedururi S

PubMed · Jul 26, 2025
On most PACS systems, radiologists currently have very limited and time-consuming options for annotating findings on images, being mostly restricted to arrows, calipers, and lines for any type of finding. We propose a framework that places encoded, transferable, highly contextual structured text annotations directly on PACS images, indicating the type of lesion, level of suspicion, location, and lesion measurement, along with TNM status for malignant lesions, and automatically integrates this information into the radiology report. This approach offers a one-stop solution for generating radiology reports that are easily understood by other radiologists, patient care providers, patients, and machines, while reducing the effort needed to dictate a detailed radiology report and minimizing speech recognition errors. It also provides a framework for the automated generation of large-volume, high-quality annotated datasets for machine learning algorithms from the daily work of radiologists. Enabling voice dictation of these contextual annotations directly into PACS, similar to voice-enabled Google search, would further enhance the user experience. Wider adoption of contextualized structured annotations could in the future facilitate studies of the temporal evolution of tumor lesions across multiple lines of treatment and the early detection of asynchronous response or areas of treatment failure. We present a futuristic vision and a solution with the potential to transform clinical work and research in oncologic imaging.
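One way to picture the proposed annotation payload is as a machine-readable record per lesion that travels with the image and is merged into the report. The field names below are hypothetical, chosen to mirror the elements the abstract lists, not a published schema or standard.

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

@dataclass
class LesionAnnotation:
    lesion_type: str           # e.g., "metastasis", "cyst"
    suspicion: str             # e.g., "high", "indeterminate"
    location: str              # e.g., "liver segment VII"
    long_axis_mm: float        # caliper measurement
    tnm: Optional[str] = None  # TNM descriptor for malignant lesions

# The serialized record could be rendered on the image and auto-inserted
# into the report text, readable by humans and machines alike.
ann = LesionAnnotation("metastasis", "high", "liver segment VII", 23.0, "M1")
print(json.dumps(asdict(ann)))
```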

A novel hybrid deep learning approach combining deep feature attention and statistical validation for enhanced thyroid ultrasound segmentation.

Banerjee T, Singh DP, Swain D, Mahajan S, Kadry S, Kim J

PubMed · Jul 26, 2025
An effective diagnosis system and suitable treatment planning require precise segmentation of thyroid nodules in ultrasound imaging. Advances in imaging technology have not resolved traditional challenges, which include noise, limited contrast, and operator dependency, highlighting the need for automated, reliable solutions. The researchers developed TATHA, an innovative deep learning architecture dedicated to improving the accuracy of thyroid ultrasound image segmentation. The model is evaluated on the Digital Database of Thyroid Ultrasound Images, which includes 99 cases across three subsets containing 134 labelled images for training, validation, and testing. It incorporates preprocessing procedures that reduce speckle noise and enhance contrast, while edge detection provides high-quality input for segmentation. TATHA outperforms U-Net, PSPNet, and Vision Transformers across datasets and cross-validation folds, achieving superior Dice scores, accuracy, and AUC. The distributed thyroid segmentation framework generates reliable predictions by combining results from multiple feature extraction units. These findings confirm that such advancements make TATHA a valuable tool for clinicians and researchers in thyroid imaging and clinical applications.
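The Dice score used to rank these models is worth stating precisely: twice the overlap between predicted and ground-truth masks, divided by their total size. A minimal NumPy version follows; the epsilon guard is a common convention to avoid division by zero on empty masks.

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)
```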

A triple pronged approach for ulcerative colitis severity classification using multimodal, meta, and transformer based learning.

Ahmed MN, Neogi D, Kabir MR, Rahman S, Momen S, Mohammed N

PubMed · Jul 26, 2025
Ulcerative colitis (UC) is a chronic inflammatory disorder necessitating precise severity stratification to facilitate optimal therapeutic interventions. This study harnesses a triple-pronged deep learning methodology, comprising multimodal inference pipelines that eliminate domain-specific training, few-shot meta-learning, and Vision Transformer (ViT)-based ensembling, to classify UC severity within the HyperKvasir dataset. We systematically evaluate multiple vision transformer architectures, finding that a Swin-Base model achieves an accuracy of 90%, while a soft-voting ensemble of diverse ViT backbones boosts performance to 93%. In parallel, we leverage multimodal pre-trained frameworks (e.g., CLIP, BLIP, FLAVA) integrated with conventional machine learning algorithms, yielding an accuracy of 83%. To address limited annotated data, we deploy few-shot meta-learning approaches (e.g., Matching Networks), attaining 83% accuracy in a 5-shot setting. Furthermore, interpretability is enhanced via SHapley Additive exPlanations (SHAP), which explain both local and global model behavior, thereby fostering clinical trust in the model's inferences. These findings underscore the potential of contemporary representation learning and ensemble strategies for robust UC severity classification, highlighting the pivotal role of model transparency in medical image analysis.
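Soft voting, the ensembling step that lifts accuracy from 90% to 93% here, simply averages each backbone's softmax probabilities before taking the argmax. A minimal PyTorch sketch, assuming a list of trained models that each map an image batch to class logits:

```python
import torch

@torch.no_grad()
def soft_vote(models, images):
    """Average class probabilities across backbones, then pick the argmax."""
    probs = [torch.softmax(m(images), dim=1) for m in models]
    return torch.stack(probs).mean(dim=0).argmax(dim=1)  # predicted severity
```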

Deep Diffusion MRI Template (DDTemplate): A Novel Deep Learning Groupwise Diffusion MRI Registration Method for Brain Template Creation.

Wang J, Zhu X, Zhang W, Du M, Wells WM, O'Donnell LJ, Zhang F

PubMed · Jul 26, 2025
Diffusion MRI (dMRI) is an advanced imaging technique that enables in-vivo tracking of white matter fiber tracts and estimation of the underlying cellular microstructure of brain tissues. Groupwise registration of dMRI data from multiple individuals is an important task for brain template creation and the investigation of inter-subject brain variability. However, groupwise registration is challenging due to the uniqueness of dMRI data, which comprise multi-dimensional, orientation-dependent signals that describe not only the strength but also the orientation of water diffusion in brain tissues. Deep learning approaches have shown successful performance in standard subject-to-subject dMRI registration; however, no deep learning methods have yet been proposed for groupwise dMRI registration. In this work, we propose Deep Diffusion MRI Template (DDTemplate), a novel deep-learning-based method that builds on the popular VoxelMorph framework to take dMRI fiber tract information into account. DDTemplate enables joint use of whole-brain tissue microstructure and tract-specific fiber orientation information to ensure alignment of white matter fiber tracts and whole-brain anatomical structures. We propose a novel deep learning framework that simultaneously trains a groupwise dMRI registration network and generates a population brain template. During inference, the trained model can be applied to register unseen subjects to the learned template. We compare DDTemplate with several state-of-the-art registration methods and demonstrate superior performance on dMRI data from multiple cohorts (adolescents, young adults, and elderly adults) acquired on different scanners. Furthermore, as a testbed task, we perform a between-population analysis to investigate sex differences in the brain using the popular Tract-Based Spatial Statistics (TBSS) method, which relies on groupwise dMRI registration. We find that using DDTemplate increases sensitivity in population-difference detection, showing the potential utility of our method in real neuroscientific applications.
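The core loop of groupwise template building can be sketched conceptually: warp every subject toward the current template, then rebuild the template as the mean of the warped images. In the sketch below, `register` and `warp` are hypothetical stand-ins for a learned VoxelMorph-style network and its spatial transformer; this is a schematic of the general idea, not DDTemplate's joint-training scheme.

```python
import numpy as np

def build_template(subjects, register, warp, n_iters=5):
    """Iteratively co-register subjects and update the population template."""
    template = np.mean(subjects, axis=0)        # initialize with a naive mean
    for _ in range(n_iters):
        # register(s, template) returns a deformation field; warp applies it.
        warped = [warp(s, register(s, template)) for s in subjects]
        template = np.mean(warped, axis=0)      # refine the population template
    return template
```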

Optimization of deep learning models for inference in low resource environments.

Thakur S, Pati S, Wu J, Panchumarthy R, Karkada D, Kozlov A, Shamporov V, Suslov A, Lyakhov D, Proshin M, Shah P, Makris D, Bakas S

PubMed · Jul 26, 2025
Artificial Intelligence (AI), and particularly deep learning (DL), has shown great promise to revolutionize healthcare. However, clinical translation is often hindered by demanding hardware requirements. In this study, we assess the effectiveness of optimization techniques for DL models in healthcare applications, targeting varying AI workloads across radiology, histopathology, and medical RGB imaging, and evaluating across hardware configurations. The assessed workloads cover both segmentation and classification: brain extraction in Magnetic Resonance Imaging (MRI), colorectal cancer delineation in Hematoxylin & Eosin (H&E)-stained digitized tissue sections, and diabetic foot ulcer classification in RGB images. We quantitatively evaluate model performance in terms of runtime during inference (including speedup, latency, and memory usage) and model utility on unseen data. Our results demonstrate that optimization techniques can substantially improve model runtime without compromising model utility. These findings suggest that optimization techniques can facilitate the clinical translation of AI models in low-resource environments, making them more practical for real-world healthcare applications even in underserved regions.
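One widely used optimization of the kind evaluated here is post-training quantization, which stores weights in int8 to cut memory use and often latency. The generic PyTorch example below is illustrative of the technique only; the study's exact toolchain and models are not assumed.

```python
import torch
import torch.nn as nn

# Toy float32 model standing in for a trained healthcare DL model.
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 2))

# Dynamic quantization: Linear weights become int8, activations stay float.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 256)
print(quantized(x).shape)  # same interface, smaller and often faster weights
```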

CLT-MambaSeg: An integrated model of Convolution, Linear Transformer and Multiscale Mamba for medical image segmentation.

Uppal D, Prakash S

PubMed · Jul 26, 2025
Recent advances in deep learning have significantly enhanced the performance of medical image segmentation. However, maintaining a balanced integration of feature localization, global context modeling, and computational efficiency remains a critical research challenge. Convolutional Neural Networks (CNNs) effectively capture fine-grained local features through hierarchical convolutions, but they often struggle to model long-range dependencies due to their limited receptive field. Transformers address this limitation by leveraging self-attention mechanisms to capture global context, but they are computationally intensive and require large-scale data for effective training. The Mamba architecture has emerged as a promising alternative, effectively capturing long-range dependencies while maintaining low computational overhead and high segmentation accuracy. Building on this, we propose CLT-MambaSeg, a method that integrates Convolution, Linear Transformer, and Multiscale Mamba architectures to capture local features, model global context, and improve computational efficiency for medical image segmentation. It utilizes a convolution-based Spatial Representation Extraction (SREx) module to capture intricate spatial relationships and dependencies, and a Mamba Vision Linear Transformer (MVLTrans) module to capture multiscale context, spatial and sequential dependencies, and enhanced global context. In addition, to address the problem of limited data, we propose a novel Memory-Guided Augmentation Generative Adversarial Network (MeGA-GAN) that generates realistic synthetic images to further enhance segmentation performance. We conduct extensive experiments and ablation studies on five benchmark datasets: CVC-ClinicDB, Breast UltraSound Images (BUSI), PH2, and two datasets from the International Skin Imaging Collaboration (ISIC), namely ISIC-2016 and ISIC-2017. Experimental results demonstrate the efficacy of the proposed CLT-MambaSeg compared to other state-of-the-art methods.
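The hybrid local-plus-global idea can be reduced to a toy block: a convolutional branch for local features fused with a simplified linear-attention branch whose cost grows linearly in the number of pixels. This is a schematic reduction for intuition, not the SREx or MVLTrans modules themselves.

```python
import torch
import torch.nn as nn

class HybridBlock(nn.Module):
    """Convolutional local branch + linear-attention-style global branch."""
    def __init__(self, channels):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, 3, padding=1)
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).softmax(dim=-1)   # (b, c, hw), spatial norm
        k = self.k(x).flatten(2).softmax(dim=1)    # (b, c, hw), channel norm
        v = self.v(x).flatten(2)                   # (b, c, hw)
        context = k @ v.transpose(1, 2)            # (b, c, c) global summary
        global_feat = (context.transpose(1, 2) @ q).view(b, c, h, w)
        return self.local(x) + global_feat         # fuse local and global
```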

KC-UNIT: Multi-kernel conversion using unpaired image-to-image translation with perceptual guidance in chest computed tomography imaging.

Choi C, Kim D, Park S, Lee H, Kim H, Lee SM, Kim N

PubMed · Jul 26, 2025
Computed tomography (CT) images are reconstructed from raw data, including sinograms, via back projection using various convolution kernels. Kernels are typically chosen depending on the anatomical structure being imaged and the specific purpose of the scan, balancing the trade-off between image sharpness and pixel noise. Because a sinogram requires large storage capacity and storage space is often limited in clinical settings, CT images are generally reconstructed with only one specific kernel, and the sinogram is typically discarded after a week. Therefore, many researchers have proposed deep learning-based image-to-image translation methods for CT kernel conversion. However, transferring the style of the target kernel while preserving anatomical structure remains challenging, particularly when translating CT images from a source domain to a target domain in an unpaired manner, as is often the case in real-world settings. We therefore propose a novel kernel conversion method using unpaired image-to-image translation (KC-UNIT). This approach utilizes discriminator regularization, using feature maps from the generator to improve semantic representation learning. To capture content and style features, cosine-similarity content and contrastive style losses are defined between the generator's feature maps and the discriminator's semantic label maps. This can be incorporated easily by modifying the discriminator's architecture, without requiring any additional learnable or pre-trained networks. KC-UNIT demonstrated the ability to preserve fine-grained anatomical structure from the source domain during transfer. Our method outperformed existing generative adversarial network-based methods across most kernel conversion tasks in three kernel domains. The code is available at https://github.com/cychoi97/KC-UNIT.
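A hedged sketch of the content term described here: a cosine-similarity loss between generator feature maps and the discriminator's semantic label map, encouraging structure preservation. The shapes, the pairing of maps, and the assumption that both tensors share a channel dimension (e.g., via a 1x1 projection) are illustrative choices, not the repository's exact code.

```python
import torch
import torch.nn.functional as F

def content_cosine_loss(gen_feat, disc_semantic):
    """Penalize low per-pixel cosine similarity between the two feature maps.

    Assumes gen_feat and disc_semantic have the same channel count; only the
    spatial sizes are reconciled here.
    """
    disc_semantic = F.interpolate(disc_semantic, size=gen_feat.shape[2:],
                                  mode="bilinear", align_corners=False)
    sim = F.cosine_similarity(gen_feat, disc_semantic, dim=1)  # (b, h, w)
    return (1.0 - sim).mean()
```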