
A lung structure and function information-guided residual diffusion model for predicting idiopathic pulmonary fibrosis progression.

Jiang C, Xing X, Nan Y, Fang Y, Zhang S, Walsh S, Yang G, Shen D

PubMed | Jul 1, 2025
Idiopathic Pulmonary Fibrosis (IPF) is a progressive lung disease that continuously scars and thickens lung tissue, leading to respiratory difficulties. Timely assessment of IPF progression is essential for developing treatment plans and improving patient survival rates. However, current clinical standards require multiple (usually two) CT scans at certain intervals to assess disease progression. This presents a dilemma: disease progression is identified only after the disease has already progressed. To address this issue, a feasible solution is to generate the follow-up CT image from the patient's initial CT image to achieve early prediction of IPF. To this end, we propose a lung structure and function information-guided residual diffusion model. The key components of our model include (1) using a 2.5D generation strategy to reduce the computational cost of generating 3D images with the diffusion model; (2) designing structural attention to mitigate the negative impact of spatial misalignment between the two CT images on generation performance; (3) employing residual diffusion to accelerate model training and inference while focusing more on the differences between the two CT images (i.e., the lesion areas); and (4) developing a CLIP-based text extraction module to extract lung function test information and using the extracted information to guide the generation. Extensive experiments demonstrate that our method can effectively predict IPF progression and achieve superior generation performance compared to state-of-the-art methods.
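To make the residual-diffusion idea concrete, the sketch below (an illustration, not the authors' code) trains a toy denoiser on the residual between baseline and follow-up CT slabs, conditioned on the baseline; the 2.5D strategy is represented by stacking three adjacent slices as channels. The network size, noise schedule, and the omission of structural attention, timestep conditioning, and CLIP-based lung-function guidance are all simplifications.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDenoiser(nn.Module):
    """Stand-in for the noise-prediction network; a real model would be a UNet with attention."""
    def __init__(self, in_ch: int = 6):  # 3-slice noisy residual slab + 3-slice baseline condition
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, 3, 3, padding=1),            # predicts noise on the 3-slice residual
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def training_step(model: nn.Module, baseline: torch.Tensor, followup: torch.Tensor,
                  n_steps: int = 1000) -> torch.Tensor:
    """One DDPM-style step on the residual; timestep conditioning is omitted for brevity."""
    residual = followup - baseline                      # learn the change, not the shared anatomy
    t = torch.randint(0, n_steps, (residual.shape[0],))
    alpha_bar = torch.cos(0.5 * torch.pi * t.float() / n_steps) ** 2  # toy cosine noise schedule
    alpha_bar = alpha_bar.view(-1, 1, 1, 1)
    noise = torch.randn_like(residual)
    noisy = alpha_bar.sqrt() * residual + (1 - alpha_bar).sqrt() * noise
    pred = model(torch.cat([noisy, baseline], dim=1))   # baseline CT conditions the denoiser
    return F.mse_loss(pred, noise)

if __name__ == "__main__":
    model = TinyDenoiser()
    baseline = torch.randn(2, 3, 64, 64)    # 2.5D slab: 3 neighbouring baseline CT slices
    followup = torch.randn(2, 3, 64, 64)    # corresponding follow-up slices
    print(float(training_step(model, baseline, followup)))
```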

How I Do It: Three-Dimensional MR Neurography and Zero Echo Time MRI for Rendering of Peripheral Nerve and Bone.

Lin Y, Tan ET, Campbell G, Breighner RE, Fung M, Wolfe SW, Carrino JA, Sneag DB

PubMed | Jul 1, 2025
MR neurography sequences provide excellent nerve-to-background soft tissue contrast, whereas a zero echo time (ZTE) MRI sequence provides cortical bone contrast. By demonstrating the spatial relationship between nerves and bones, a combination of rendered three-dimensional (3D) MR neurography and ZTE sequences provides a roadmap for clinical decision-making, particularly for surgical intervention. In this article, the authors describe the method for fused rendering of peripheral nerve and bone by combining nerve and bone structures from 3D MR neurography and 3D ZTE MRI, respectively. The described method includes scanning acquisition, postprocessing that entails deep learning-based reconstruction techniques, and rendering techniques. Representative case examples demonstrate the steps and clinical use of these techniques. Challenges in nerve and bone rendering are also discussed.

Radiomics for lung cancer diagnosis, management, and future prospects.

Boubnovski Martell M, Linton-Reid K, Chen M, Aboagye EO

PubMed | Jul 1, 2025
Lung cancer remains the leading cause of cancer-related mortality worldwide, with its early detection and effective treatment posing significant clinical challenges. Radiomics, the extraction of quantitative features from medical imaging, has emerged as a promising approach for enhancing diagnostic accuracy, predicting treatment responses, and personalising patient care. This review explores the role of radiomics in lung cancer diagnosis and management, with methods ranging from handcrafted radiomics to deep learning techniques that can capture biological intricacies. The key applications are highlighted across various stages of lung cancer care, including nodule detection, histology prediction, and disease staging, where artificial intelligence (AI) models demonstrate superior specificity and sensitivity. The article also examines future directions, emphasising the integration of large language models, explainable AI (XAI), and super-resolution imaging techniques as transformative developments. By merging diverse data sources and incorporating interpretability into AI models, radiomics stands poised to redefine clinical workflows, offering more robust and reliable tools for lung cancer diagnosis, treatment planning, and outcome prediction. These advancements underscore radiomics' potential in supporting precision oncology and improving patient outcomes through data-driven insights.
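As a concrete illustration of what "handcrafted radiomics" means in practice, the sketch below computes a few first-order intensity features inside a nodule mask. It is a minimal example under stated assumptions, not a validated pipeline; real studies typically use IBSI-compliant toolkits such as PyRadiomics and add shape and texture feature families.

```python
import numpy as np

def first_order_features(image: np.ndarray, mask: np.ndarray) -> dict:
    """First-order radiomic features from the HU values inside a region of interest."""
    vox = image[mask.astype(bool)].astype(float)
    counts, _ = np.histogram(vox, bins=64)
    p = counts / counts.sum()
    p = p[p > 0]
    return {
        "mean_hu": float(vox.mean()),
        "std_hu": float(vox.std()),
        "skewness": float(((vox - vox.mean()) ** 3).mean() / (vox.std() ** 3 + 1e-9)),
        "energy": float((vox ** 2).sum()),
        "entropy": float(-(p * np.log2(p)).sum()),      # histogram entropy of ROI intensities
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.normal(-650.0, 120.0, size=(32, 32, 32))  # synthetic lung-like HU volume
    roi = np.zeros(img.shape, dtype=bool)
    roi[12:20, 12:20, 12:20] = True                     # toy "nodule" region
    print(first_order_features(img, roi))
```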

Impact of CT reconstruction algorithms on pericoronary and epicardial adipose tissue attenuation.

Xiao H, Wang X, Yang P, Wang L, Xi J, Xu J

PubMed | Jul 1, 2025
This study aims to investigate the impact of adaptive statistical iterative reconstruction-Veo (ASIR-V) and deep learning image reconstruction (DLIR) algorithms on the quantification of pericoronary adipose tissue (PCAT) and epicardial adipose tissue (EAT). We further explore the feasibility of correcting these effects through fat threshold adjustment. A retrospective analysis was conducted on the imaging data of 134 patients who underwent coronary CT angiography (CCTA) between December 2023 and January 2024. These data were reconstructed into seven datasets using filtered back projection (FBP), ASIR-V at three different intensities (ASIR-V 30%, ASIR-V 50%, ASIR-V 70%), and DLIR at three different intensities (DLIR-L, DLIR-M, DLIR-H). Repeated-measures ANOVA was used to compare differences in fat, PCAT, and EAT attenuation values among the reconstruction algorithms, and Bland-Altman plots were used to analyze the agreement between ASIR-V or DLIR and FBP algorithms in PCAT attenuation values. Compared to FBP, ASIR-V 30%, ASIR-V 50%, ASIR-V 70%, DLIR-L, DLIR-M, and DLIR-H significantly increased fat attenuation values (-103.91 ± 12.99 HU, -102.53 ± 12.68 HU, -101.14 ± 12.78 HU, -101.81 ± 12.41 HU, -100.87 ± 12.25 HU, -99.08 ± 12.00 HU vs. -105.95 ± 13.01 HU, all p < 0.001). When the fat threshold was set at -190 to -30 HU, the ASIR-V and DLIR algorithms significantly increased PCAT and EAT attenuation values compared to the FBP algorithm (all p < 0.05), with these values increasing as the reconstruction intensity level increased. After correction with a fat threshold of -200 to -35 HU for ASIR-V 30%, -200 to -40 HU for ASIR-V 50% and DLIR-L, and -200 to -45 HU for ASIR-V 70%, DLIR-M, and DLIR-H, the mean differences in PCAT attenuation values between ASIR-V or DLIR and FBP decreased (-0.03 to 1.68 HU vs. 2.35 to 8.69 HU), and no significant difference was found in PCAT attenuation values between FBP and ASIR-V 30%, ASIR-V 50%, ASIR-V 70%, DLIR-L, and DLIR-M (all p > 0.05). Compared to the FBP algorithm, ASIR-V and DLIR algorithms increase PCAT and EAT attenuation values. Adjusting the fat threshold can mitigate the impact of ASIR-V and DLIR algorithms on PCAT attenuation values.
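For intuition, the sketch below shows the kind of measurement at issue: the mean attenuation of voxels that fall inside a fat HU window within a perivascular (PCAT) or epicardial (EAT) region, and how shifting that window, as the study does after reconstruction-specific correction, changes the reported value. The masks and values here are synthetic placeholders, not the study's software.

```python
import numpy as np

def fat_attenuation(hu_volume: np.ndarray, region_mask: np.ndarray,
                    lo: float = -190.0, hi: float = -30.0) -> float:
    """Mean HU of voxels within [lo, hi] inside the region (e.g., PCAT or EAT)."""
    region = hu_volume[region_mask.astype(bool)]
    fat = region[(region >= lo) & (region <= hi)]
    return float(fat.mean()) if fat.size else float("nan")

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    vol = rng.normal(-90.0, 40.0, size=(64, 64, 40))     # synthetic perivascular HU values
    mask = np.zeros(vol.shape, dtype=bool)
    mask[20:44, 20:44, 10:30] = True                     # toy perivascular region
    print("default window :", fat_attenuation(vol, mask))             # -190 to -30 HU
    print("adjusted window:", fat_attenuation(vol, mask, -200, -40))  # corrected threshold
```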

Zero-shot segmentation of spinal vertebrae with metastatic lesions: an analysis of Meta's Segment Anything Model 2 and factors affecting learning-free segmentation.

Khazanchi R, Govind S, Jain R, Du R, Dahdaleh NS, Ahuja CS, El Tecle N

PubMed | Jul 1, 2025
Accurate vertebral segmentation is an important step in imaging analysis pipelines for diagnosis and subsequent treatment of spinal metastases. Segmenting these metastases is especially challenging given their radiological heterogeneity. Conventional approaches for segmenting vertebrae have included manual review or deep learning; however, manual review is time-intensive with interrater reliability issues, while deep learning requires large datasets to train. The rise of generative AI, notably tools such as Meta's Segment Anything Model 2 (SAM 2), holds promise in its ability to rapidly generate segmentations of any image without pretraining (zero-shot). The authors of this study aimed to assess the ability of SAM 2 to segment vertebrae with metastases. A publicly available set of spinal CT scans from The Cancer Imaging Archive was used, which included patient sex, BMI, vertebral locations, types of metastatic lesion (lytic, blastic, or mixed), and primary cancer type. Ground-truth segmentations for each vertebra, derived by neuroradiologists, were further extracted from the dataset. SAM 2 then produced segmentations for each vertebral slice without any training data, all of which were compared to gold standard segmentations using the Dice similarity coefficient (DSC). Relative performance differences were assessed across clinical subgroups using standard statistical techniques. Imaging data were extracted for 55 patients and 779 unique thoracolumbar vertebrae, 167 of which had metastatic tumor involvement. Across these vertebrae, SAM 2 had a mean volumetric DSC of 0.833 ± 0.053. SAM 2 performed significantly worse on thoracic vertebrae relative to lumbar vertebrae, in female patients relative to male patients, and in obese patients relative to non-obese patients. These results demonstrate that general-purpose segmentation models like SAM 2 can provide reasonable vertebral segmentation accuracy with no pretraining, with efficacy comparable to previously published trained models. Future research should include optimizations of spine segmentation models for vertebral location and patient body habitus, as well as for variations in imaging quality.
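A minimal sketch of the volumetric Dice similarity coefficient used to score the SAM 2 masks against the neuroradiologist-derived ground truth (illustrative only, with toy masks):

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """DSC = 2|P ∩ G| / (|P| + |G|) over binary volumes."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return float((2.0 * inter + eps) / (pred.sum() + gt.sum() + eps))

if __name__ == "__main__":
    gt = np.zeros((64, 64, 32), dtype=bool)
    gt[16:48, 16:48, 8:24] = True               # toy vertebral mask
    pred = np.roll(gt, shift=3, axis=0)         # slightly misaligned "prediction"
    print(round(dice(pred, gt), 3))
```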

Integrated brain connectivity analysis with fMRI, DTI, and sMRI powered by interpretable graph neural networks.

Qu G, Zhou Z, Calhoun VD, Zhang A, Wang YP

PubMed | Jul 1, 2025
Multimodal neuroimaging data modeling has become a widely used approach but confronts considerable challenges due to their heterogeneity, which encompasses variability in data types, scales, and formats across modalities. This variability necessitates the deployment of advanced computational methods to integrate and interpret diverse datasets within a cohesive analytical framework. In our research, we combine functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and structural MRI (sMRI) for joint analysis. This integration capitalizes on the unique strengths of each modality and their inherent interconnections, aiming for a comprehensive understanding of the brain's connectivity and anatomical characteristics. Utilizing the Glasser atlas for parcellation, we integrate imaging-derived features from multiple modalities - functional connectivity from fMRI, structural connectivity from DTI, and anatomical features from sMRI - within consistent regions. Our approach incorporates a masking strategy to differentially weight neural connections, thereby facilitating an amalgamation of multimodal imaging data. This technique enhances interpretability at the connectivity level, transcending traditional analyses centered on singular regional attributes. The model is applied to the Human Connectome Project's Development study to elucidate the associations between multimodal imaging and cognitive functions throughout youth. The analysis demonstrates improved prediction accuracy and uncovers crucial anatomical features and neural connections, deepening our understanding of brain structure and function. This study not only advances multimodal neuroimaging analytics by offering a novel method for integrative analysis of diverse imaging modalities but also improves the understanding of intricate relationships between brain's structural and functional networks and cognitive development.
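The masking idea can be sketched as a learnable edge mask that re-weights a connectivity matrix before message passing, so the learned weights themselves can be inspected at the connection level. The layer below is an illustrative simplification under assumed names and dimensions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MaskedGraphLayer(nn.Module):
    def __init__(self, n_regions: int, in_dim: int, out_dim: int):
        super().__init__()
        self.edge_logits = nn.Parameter(torch.zeros(n_regions, n_regions))  # learnable mask
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, adjacency: torch.Tensor, features: torch.Tensor) -> torch.Tensor:
        mask = torch.sigmoid(self.edge_logits)          # in (0,1): down-/up-weights connections
        a = adjacency * mask                            # differentially weighted connectivity
        a = a / (a.sum(dim=-1, keepdim=True) + 1e-8)    # row-normalise before propagation
        return torch.relu(self.lin(a @ features))       # simple graph convolution

if __name__ == "__main__":
    n, d = 360, 4                                       # e.g., Glasser atlas regions
    adj = torch.rand(n, n)
    adj = (adj + adj.T) / 2                             # symmetric toy connectivity (fMRI or DTI)
    feats = torch.randn(n, d)                           # per-region anatomical features (sMRI)
    layer = MaskedGraphLayer(n, d, 8)
    print(layer(adj, feats).shape)                      # torch.Size([360, 8])
```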

A vision transformer-convolutional neural network framework for decision-transparent dual-energy X-ray absorptiometry recommendations using chest low-dose CT.

Kuo DP, Chen YC, Cheng SJ, Hsieh KL, Li YT, Kuo PC, Chang YC, Chen CY

PubMed | Jul 1, 2025
This study introduces an ensemble framework that integrates Vision Transformer (ViT) and Convolutional Neural Network (CNN) models to leverage their complementary strengths, generating visualized and decision-transparent recommendations for dual-energy X-ray absorptiometry (DXA) scans from chest low-dose computed tomography (LDCT). The framework was developed using data from 321 individuals and validated with an independent test cohort of 186 individuals. It addresses two classification tasks: (1) distinguishing normal from abnormal bone mineral density (BMD) and (2) differentiating osteoporosis from non-osteoporosis. Three field-of-view (FOV) settings were analyzed to assess their impact on model performance: fitFOV (entire vertebra), halfFOV (vertebral body only), and largeFOV (fitFOV + 20%). Model predictions were weighted and combined to enhance classification accuracy, and visualizations were generated to improve decision transparency. DXA scans were recommended for individuals classified as having abnormal BMD or osteoporosis. The ensemble framework significantly outperformed individual models in both classification tasks (McNemar test, p < 0.001). In the development cohort, it achieved 91.6% accuracy for task 1 with largeFOV (area under the receiver operating characteristic curve [AUROC]: 0.97) and 86.0% accuracy for task 2 with fitFOV (AUROC: 0.94). In the test cohort, it demonstrated 86.6% accuracy for task 1 (AUROC: 0.93) and 76.9% accuracy for task 2 (AUROC: 0.99). DXA recommendation accuracy was 91.6% and 87.1% in the development and test cohorts, respectively, with notably high accuracy for osteoporosis detection (98.7% and 100%). This combined ViT-CNN framework effectively assesses bone status from LDCT images, particularly when utilizing fitFOV and largeFOV settings. By visualizing classification confidence and vertebral abnormalities, the proposed framework enhances decision transparency and supports clinicians in making informed DXA recommendations following opportunistic osteoporosis screening.
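A hedged sketch of the ensemble step: per-case probabilities from the ViT and CNN are weighted and combined by soft voting, and a DXA referral is recommended when the combined probability crosses a threshold. The weights and threshold shown are placeholders, not the study's tuned values.

```python
import numpy as np

def ensemble_recommendation(p_vit: np.ndarray, p_cnn: np.ndarray,
                            w_vit: float = 0.5, threshold: float = 0.5) -> np.ndarray:
    """p_* : per-case probability of the positive class (abnormal BMD or osteoporosis)."""
    p = w_vit * p_vit + (1.0 - w_vit) * p_cnn           # weighted soft-voting
    return p >= threshold                               # True -> recommend DXA referral

if __name__ == "__main__":
    p_vit = np.array([0.82, 0.31, 0.55])
    p_cnn = np.array([0.74, 0.22, 0.61])
    print(ensemble_recommendation(p_vit, p_cnn))        # [ True False  True]
```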

A systematic review of generative AI approaches for medical image enhancement: Comparing GANs, transformers, and diffusion models.

Oulmalme C, Nakouri H, Jaafar F

PubMed | Jul 1, 2025
Medical imaging is a vital diagnostic tool that provides detailed insights into human anatomy but faces challenges affecting its accuracy and efficiency. Advanced generative AI models offer promising solutions, but unlike previous reviews with a narrow focus, a comprehensive evaluation across techniques and modalities is needed. This systematic review integrates three leading state-of-the-art approaches, GANs, Diffusion Models, and Transformers, examining their applicability, methodologies, and clinical implications in improving medical image quality. Using the PRISMA framework, 63 of 989 candidate studies were selected via Google Scholar and PubMed, focusing on GANs, Transformers, and Diffusion Models; articles from ACM, IEEE Xplore, and Springer were analyzed. Generative AI techniques show promise in improving image resolution, reducing noise, and enhancing fidelity: GANs generate high-quality images, Transformers utilize global context, and Diffusion Models are effective in denoising and reconstruction. Challenges include high computational costs, limited dataset diversity, and issues with generalizability, with a focus on quantitative metrics over clinical applicability. This review highlights the transformative impact of GANs, Transformers, and Diffusion Models in advancing medical imaging. Future research must address computational and generalization challenges, emphasize open science, and validate these techniques in diverse clinical settings to unlock their full potential. These efforts could enhance diagnostic accuracy, lower costs, and improve patient outcomes.

Multiparametric MRI for Assessment of the Biological Invasiveness and Prognosis of Pancreatic Ductal Adenocarcinoma in the Era of Artificial Intelligence.

Zhao B, Cao B, Xia T, Zhu L, Yu Y, Lu C, Tang T, Wang Y, Ju S

PubMed | Jul 1, 2025
Pancreatic ductal adenocarcinoma (PDAC) is the deadliest malignant tumor, with a grim 5-year overall survival rate of about 12%. As its incidence and mortality rates rise, it is likely to become the second-leading cause of cancer-related death. Radiological assessment determines the stage and management of PDAC. However, PDAC is a highly heterogeneous disease with a complex tumor microenvironment, and morphological evaluation alone cannot adequately reflect its biological aggressiveness and prognosis. With the rapid development of artificial intelligence (AI), multiparametric magnetic resonance imaging (mpMRI) using specific contrast media and special techniques can provide morphological and functional information with high image quality and has become a powerful tool for quantifying intratumoral characteristics. AI has also become widespread in medical imaging analysis. Radiomics is the high-throughput mining of quantitative image features from medical imaging that enables data to be extracted and applied for better decision support. Deep learning is a family of artificial neural network algorithms that can automatically learn feature representations from data. AI-enabled imaging biomarkers derived from mpMRI hold enormous promise to bridge the gap between medical imaging and personalized medicine and show clear advantages in predicting the biological characteristics and prognosis of PDAC. However, current AI-based models for PDAC operate mainly on a single modality with relatively small sample sizes, and technical reproducibility and biological interpretation pose new challenges. In the future, the integration of multi-omics data, such as radiomics and genomics, alongside the establishment of standardized analytical frameworks, will provide opportunities to increase the robustness and interpretability of AI-enabled imaging biomarkers and bring them closer to clinical practice. Evidence Level: 3. Technical Efficacy: Stage 4.

CXR-LLaVA: a multimodal large language model for interpreting chest X-ray images.

Lee S, Youn J, Kim H, Kim M, Yoon SH

PubMed | Jul 1, 2025
This study aimed to develop an open-source multimodal large language model (CXR-LLaVA) for interpreting chest X-ray images (CXRs), leveraging recent advances in large language models (LLMs) to potentially replicate the image interpretation skills of human radiologists. For training, we collected 592,580 publicly available CXRs, of which 374,881 had labels for certain radiographic abnormalities (Dataset 1) and 217,699 provided free-text radiology reports (Dataset 2). After pre-training a vision transformer with Dataset 1, we integrated it with an LLM influenced by the LLaVA network. The model was then fine-tuned, primarily using Dataset 2. The model's diagnostic performance for major pathological findings was evaluated, along with the acceptability of its radiologic reports to human radiologists, to gauge its potential for autonomous reporting. The model demonstrated strong performance in the test sets, achieving an average F1 score of 0.81 for six major pathological findings in the MIMIC internal test set and 0.56 in the external test set. The model's F1 scores surpassed those of GPT-4-vision and Gemini-Pro-Vision in both test sets. In human radiologist evaluations of the external test set, the model achieved a 72.7% success rate in autonomous reporting, slightly below the 84.0% rate of ground-truth reports. This study highlights the significant potential of multimodal LLMs for CXR interpretation, while also acknowledging their performance limitations. Despite these challenges, we believe that making our model open-source will catalyze further research, expanding its effectiveness and applicability in various clinical contexts. Question: How can a multimodal large language model be adapted to interpret chest X-rays and generate radiologic reports? Findings: The developed CXR-LLaVA model effectively detects major pathological findings in chest X-rays and generates radiologic reports with higher accuracy than general-purpose models. Clinical relevance: This study demonstrates the potential of multimodal large language models to support radiologists by autonomously generating chest X-ray reports, potentially reducing diagnostic workloads and improving radiologist efficiency.
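To illustrate the LLaVA-style coupling the abstract describes, the sketch below projects vision-transformer patch features into the language model's embedding space and prepends them to the text token embeddings. Dimensions and module names are illustrative assumptions, not CXR-LLaVA's actual configuration.

```python
import torch
import torch.nn as nn

class VisionLanguageConnector(nn.Module):
    def __init__(self, vision_dim: int = 768, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Linear(vision_dim, llm_dim)      # maps CXR patch features to LLM space

    def forward(self, patch_feats: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        img_tokens = self.proj(patch_feats)             # (B, n_patches, llm_dim)
        return torch.cat([img_tokens, text_embeds], 1)  # image tokens prefix the prompt

if __name__ == "__main__":
    connector = VisionLanguageConnector()
    patch_feats = torch.randn(1, 196, 768)              # ViT output for one chest X-ray
    text_embeds = torch.randn(1, 32, 4096)              # embedded report-generation prompt
    print(connector(patch_feats, text_embeds).shape)    # torch.Size([1, 228, 4096])
```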
