
AI-based automatic measurement of split renal function in [¹⁸F]PSMA-1007 PET/CT.

Valind K, Ulén J, Gålne A, Jögi J, Minarik D, Trägårdh E

pubmed · Jun 16 2025
Prostate-specific membrane antigen (PSMA) is an important target for positron emission tomography (PET) with computed tomography (CT) in prostate cancer. In addition to being overexpressed in prostate cancer cells, PSMA is expressed in healthy cells in the proximal tubules of the kidneys. Consequently, PSMA PET is being explored for renal functional imaging. Left and right renal uptake of PSMA-targeted radiopharmaceuticals has shown strong correlations with split renal function (SRF) as determined by other methods. Manual segmentation of kidneys in PET images is, however, time-consuming, making this method of measuring SRF impractical. In this study, we designed, trained, and validated an artificial intelligence (AI) model for automatic renal segmentation and measurement of SRF in [¹⁸F]PSMA-1007 PET images. Kidneys were segmented in 135 [¹⁸F]PSMA-1007 PET/CT studies used to train the AI model. The model was evaluated on 40 test studies. Left renal function percentage (LRF%) measurements ranged from 40 to 67%. Spearman correlation coefficients for LRF% measurements ranged between 0.98 and 0.99 when comparing segmentations made by three human readers and the AI model. The largest LRF% difference between any two measurements in a single case was 3 percentage points. The AI model produced measurements similar to those of human readers, demonstrating that automatic measurement of SRF in PSMA PET is feasible. A potential use could be to provide additional data in the investigation of renal functional impairment in patients treated for prostate cancer.
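
As a back-of-the-envelope illustration (not the authors' pipeline, which uses an AI segmentation model to produce the kidney masks), split renal function reduces to a ratio of summed uptake over the two kidney segmentations; the function name and toy data below are hypothetical:

```python
import numpy as np

def split_renal_function(pet_suv, left_mask, right_mask):
    """Left/right renal function percentages from a PET volume.

    pet_suv: 3D array of PET uptake values (e.g. SUV).
    left_mask, right_mask: boolean kidney masks of the same shape.
    Returns (LRF%, RRF%). Illustrative only.
    """
    left_uptake = float(pet_suv[left_mask].sum())
    right_uptake = float(pet_suv[right_mask].sum())
    lrf = 100.0 * left_uptake / (left_uptake + right_uptake)
    return lrf, 100.0 - lrf

# Toy example: a random volume with two disjoint "kidney" regions.
rng = np.random.default_rng(0)
vol = rng.random((64, 64, 64))
left = np.zeros_like(vol, dtype=bool);  left[10:20, 10:20, 10:20] = True
right = np.zeros_like(vol, dtype=bool); right[40:50, 40:50, 40:50] = True
lrf, rrf = split_renal_function(vol, left, right)
print(f"LRF: {lrf:.1f}%, RRF: {rrf:.1f}%")
```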

Two-stage convolutional neural network for segmentation and detection of carotid web on CT angiography.

Kuang H, Tan X, Bala F, Huang J, Zhang J, Alhabli I, Benali F, Singh N, Ganesh A, Coutts SB, Almekhlafi MA, Goyal M, Hill MD, Qiu W, Menon BK

pubmed · Jun 16 2025
Carotid web (CaW) is a risk factor for ischemic stroke, mainly in young patients with stroke of undetermined etiology. Its detection is challenging, especially for non-experienced physicians. We included patients with CaW from six international trials and registries of patients with acute ischemic stroke. Identification and manual segmentation of CaW were performed by three trained radiologists. We designed a two-stage segmentation strategy based on a convolutional neural network (CNN). In the first stage, the two carotid arteries were segmented using a U-shaped CNN. In the second stage, the search for CaW was first confined to the vicinity of the carotid arteries; the carotid bifurcation region was then localized by the proposed carotid bifurcation localization algorithm, followed by another U-shaped CNN. A volume threshold derived from the CaW manual segmentation statistics was then used to determine whether CaW was present. We included 58 patients (median (IQR) age 59 (50-75) years, 60% women). The Dice similarity coefficient and 95th percentile Hausdorff distance between manually and algorithm-segmented CaW were 63.20±19.03% and 1.19±0.9 mm, respectively. Using a volume threshold of 5 mm³, binary classification metrics for CaW detection on a single artery were as follows: accuracy 92.2% (95% CI 87.93% to 96.55%), precision 94.83% (95% CI 88.68% to 100.00%), sensitivity 90.16% (95% CI 82.16% to 96.97%), specificity 94.55% (95% CI 88.0% to 100.0%), F1 measure 0.9244 (95% CI 0.8679 to 0.9692), and area under the curve 0.9235 (95% CI 0.8726 to 0.9688). The proposed two-stage method enables reliable segmentation and detection of CaW from head and neck CT angiography.
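
The final detection step described above is a simple volume cutoff. A minimal sketch of that step, assuming a binary mask and a voxel-spacing argument (the spacing handling is our assumption; the paper only specifies the 5 mm³ threshold):

```python
import numpy as np

def detect_caw(seg_mask, voxel_spacing_mm, threshold_mm3=5.0):
    """Decide CaW presence from a binary segmentation.

    seg_mask: 3D boolean array output by the second-stage CNN.
    voxel_spacing_mm: (dz, dy, dx) spacing in millimetres.
    A candidate counts as CaW if its segmented volume exceeds
    the threshold (5 mm^3 in the paper).
    """
    voxel_volume = float(np.prod(voxel_spacing_mm))  # mm^3 per voxel
    segmented_volume = seg_mask.sum() * voxel_volume
    return segmented_volume > threshold_mm3

mask = np.zeros((32, 32, 32), dtype=bool)
mask[10:12, 10:14, 10:14] = True            # 32 voxels
print(detect_caw(mask, (0.5, 0.5, 0.5)))    # 32 * 0.125 = 4 mm^3 -> False
```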

PRO: Projection Domain Synthesis for CT Imaging

Kang Chen, Bin Huang, Xuebin Yang, Junyan Zhang, Qiegen Liu

arxiv preprint · Jun 16 2025
Synthesizing high-quality CT images remains a significant challenge due to the limited availability of annotated data and the complex nature of CT imaging. In this work, we present PRO, a novel framework that, to the best of our knowledge, is the first to perform CT image synthesis in the projection domain using latent diffusion models. Unlike previous approaches that operate in the image domain, PRO learns rich structural representations from raw projection data and leverages anatomical text prompts for controllable synthesis. This projection-domain strategy enables more faithful modeling of the underlying imaging physics and anatomical structures. Moreover, PRO functions as a foundation model, capable of generalizing across diverse downstream tasks by adjusting its generative behavior via prompt inputs. Experimental results demonstrate that incorporating our synthesized data significantly improves performance across multiple downstream tasks, including low-dose and sparse-view reconstruction, even with limited training data. These findings underscore the versatility and scalability of PRO in data generation for various CT applications and highlight the potential of projection-domain synthesis as a powerful tool for data augmentation and robust CT imaging. Our source code is publicly available at: https://github.com/yqx7150/PRO.
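
For readers unfamiliar with the projection domain: CT projections (sinograms) relate to images via the Radon transform. The sketch below, using scikit-image's radon/iradon, only illustrates the image-to-projection round trip; PRO itself trains a latent diffusion model directly on projection data:

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Image domain -> projection domain (sinogram), then back.
image = rescale(shepp_logan_phantom(), scale=0.5)      # 200x200 phantom
theta = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
sinogram = radon(image, theta=theta)                   # projection domain
reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")

print(sinogram.shape)                        # (detector bins, n_angles)
print(np.abs(reconstruction - image).mean()) # small reconstruction error
```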

ViT-NeBLa: A Hybrid Vision Transformer and Neural Beer-Lambert Framework for Single-View 3D Reconstruction of Oral Anatomy from Panoramic Radiographs

Bikram Keshari Parida, Anusree P. Sunilkumar, Abhijit Sen, Wonsang You

arxiv preprint · Jun 16 2025
Dental diagnosis relies on two primary imaging modalities: panoramic radiographs (PX) providing 2D oral cavity representations, and Cone-Beam Computed Tomography (CBCT) offering detailed 3D anatomical information. While PX images are cost-effective and accessible, their lack of depth information limits diagnostic accuracy. CBCT addresses this but presents drawbacks including higher costs, increased radiation exposure, and limited accessibility. Existing reconstruction models further complicate the process by requiring CBCT flattening or prior dental arch information, often unavailable clinically. We introduce ViT-NeBLa, a vision transformer-based Neural Beer-Lambert model enabling accurate 3D reconstruction directly from a single PX. Our key innovations include: (1) enhancing the NeBLa framework with Vision Transformers for improved reconstruction capabilities without requiring CBCT flattening or prior dental arch information, (2) implementing a novel horseshoe-shaped point sampling strategy with non-intersecting rays that eliminates the intermediate density aggregation required by existing models due to intersecting rays, reducing sampling point computations by 52%, (3) replacing the CNN-based U-Net with a hybrid ViT-CNN architecture for superior global and local feature extraction, and (4) implementing learnable hash positional encoding for better higher-dimensional representation of 3D sample points compared to existing Fourier-based dense positional encoding. Experiments demonstrate that ViT-NeBLa significantly outperforms prior state-of-the-art methods both quantitatively and qualitatively, offering a cost-effective, radiation-efficient alternative for enhanced dental diagnostics.
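
The Beer-Lambert law underlying the framework is plain exponential attenuation along a ray, I = I0·exp(-∫ μ ds); a discretized sketch (our simplification, not the authors' renderer, in which a network predicts μ per 3D point):

```python
import numpy as np

def beer_lambert_intensity(mu_samples, step_mm, i0=1.0):
    """Discretized Beer-Lambert attenuation along one ray.

    mu_samples: attenuation coefficients mu(s) at points sampled along
                the ray (in 1/mm).
    step_mm: spacing between consecutive samples.
    Returns transmitted intensity I = I0 * exp(-sum(mu_i * ds)).
    """
    optical_depth = np.sum(mu_samples) * step_mm
    return i0 * np.exp(-optical_depth)

mu = np.full(100, 0.02)                 # uniform medium, mu = 0.02 / mm
print(beer_lambert_intensity(mu, 0.5))  # exp(-0.02*100*0.5) = exp(-1) ~ 0.368
```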

MultiViT2: A Data-augmented Multimodal Neuroimaging Prediction Framework via Latent Diffusion Model

Bi Yuda, Jia Sihan, Gao Yutong, Abrol Anees, Fu Zening, Calhoun Vince

arxiv preprint · Jun 16 2025
Multimodal medical imaging integrates diverse data types, such as structural and functional neuroimaging, to provide complementary insights that enhance deep learning predictions and improve outcomes. This study focuses on a neuroimaging prediction framework based on both structural and functional neuroimaging data. We propose a next-generation prediction model, MultiViT2, which combines a pretrained representation-learning base model with a vision transformer backbone for prediction output. Additionally, we developed a data augmentation module based on a latent diffusion model that enriches input data by generating augmented neuroimaging samples, thereby enhancing predictive performance through reduced overfitting and improved generalizability. We show that MultiViT2 significantly outperforms the first-generation model in schizophrenia classification accuracy and demonstrates strong scalability and portability.

Beyond the First Read: AI-Assisted Perceptual Error Detection in Chest Radiography Accounting for Interobserver Variability

Adhrith Vutukuri, Akash Awasthi, David Yang, Carol C. Wu, Hien Van Nguyen

arxiv preprint · Jun 16 2025
Chest radiography is widely used in diagnostic imaging. However, perceptual errors, especially overlooked but visible abnormalities, remain common and clinically significant. Current workflows and AI systems provide limited support for detecting such errors after interpretation and often lack meaningful human-AI collaboration. We introduce RADAR (Radiologist-AI Diagnostic Assistance and Review), a post-interpretation companion system. RADAR ingests finalized radiologist annotations and CXR images, then performs region-level analysis to detect and refer potentially missed abnormal regions. The system supports a "second-look" workflow and offers suggested regions of interest (ROIs) rather than fixed labels to accommodate inter-observer variation. We evaluated RADAR on a simulated perceptual-error dataset derived from de-identified CXR cases, using the F1 score and Intersection over Union (IoU) as primary metrics. RADAR achieved a recall of 0.78, precision of 0.44, and an F1 score of 0.56 in detecting missed abnormalities. Although precision is moderate, the referral-based design encourages radiologist oversight rather than over-reliance on AI. The median IoU was 0.78, with more than 90% of referrals exceeding 0.5 IoU, indicating accurate regional localization. RADAR effectively complements radiologist judgment, providing valuable post-read support for perceptual-error detection in CXR interpretation. Its flexible ROI suggestions and non-intrusive integration position it as a promising tool for real-world radiology workflows. To facilitate reproducibility and further evaluation, we release a fully open-source web implementation alongside the simulated error dataset. All code, data, demonstration videos, and the application are publicly available at https://github.com/avutukuri01/RADAR.
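
The reported F1 of 0.56 follows directly from precision and recall via F1 = 2PR/(P+R), and the localization metric is standard mask IoU; a quick self-contained check (the toy masks are hypothetical):

```python
import numpy as np

def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def mask_iou(pred, target):
    """Intersection over Union between two boolean masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union

print(round(f1_score(0.44, 0.78), 2))    # 0.56, matching the paper

a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True
print(mask_iou(a, b))                    # 25/47 ~ 0.53
```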

Default Mode Network Connectivity Predicts Individual Differences in Long-Term Forgetting: Evidence for Storage Degradation, not Retrieval Failure

Xu, Y., Prat, C. S., Sense, F., van Rijn, H., Stocco, A.

biorxiv preprint · Jun 16 2025
Despite the importance of memories in everyday life and the progress made in understanding how they are encoded and retrieved, the neural processes by which declarative memories are maintained or forgotten remain elusive. Part of the problem is that it is empirically difficult to measure the rate at which memories fade, even between repeated presentations of the source of the memory. Without such a ground-truth measure, it is hard to identify the corresponding neural correlates. This study addresses this problem by comparing individual patterns of functional connectivity against behavioral differences in forgetting speed derived from computational phenotyping. Specifically, the individual-specific values of the speed of forgetting in long-term memory (LTM) were estimated for 33 participants using a formal model fit to accuracy and response time data from an adaptive paired-associate learning task. Individual speeds of forgetting were then used to examine participant-specific patterns of resting-state fMRI connectivity, using machine learning techniques to identify the most predictive and generalizable features. Our results show that individual speeds of forgetting are associated with resting-state connectivity within the default mode network (DMN) as well as between the DMN and cortical sensory areas. Cross-validation showed that individual speeds of forgetting were predicted with high accuracy (r = .78) from these connectivity patterns alone. These results support the view that DMN activity and the associated sensory regions are actively involved in maintaining memories and preventing their decline, a view that can be seen as evidence for the hypothesis that forgetting is a result of storage degradation, rather than of retrieval failure.
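
As a hedged sketch of the kind of cross-validated connectivity-to-behavior prediction described, the snippet below uses ridge regression on synthetic data as a stand-in; the paper's actual estimator, feature selection, and data are not specified in the abstract:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict, KFold
from scipy.stats import pearsonr

# Synthetic stand-in: 33 participants, vectorized connectivity features.
rng = np.random.default_rng(42)
n_subjects, n_edges = 33, 500
connectivity = rng.standard_normal((n_subjects, n_edges))
true_weights = np.zeros(n_edges); true_weights[:10] = 1.0
speed_of_forgetting = (connectivity @ true_weights
                       + 0.5 * rng.standard_normal(n_subjects))

# Cross-validation: predict behavior from held-out connectivity patterns.
model = Ridge(alpha=10.0)
predicted = cross_val_predict(
    model, connectivity, speed_of_forgetting,
    cv=KFold(n_splits=11, shuffle=True, random_state=0))
r, _ = pearsonr(predicted, speed_of_forgetting)
print(f"cross-validated r = {r:.2f}")
```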

An 11,000-Study Open-Access Dataset of Longitudinal Magnetic Resonance Images of Brain Metastases

Saahil Chadha, David Weiss, Anastasia Janas, Divya Ramakrishnan, Thomas Hager, Klara Osenberg, Klara Willms, Joshua Zhu, Veronica Chiang, Spyridon Bakas, Nazanin Maleki, Durga V. Sritharan, Sven Schoenherr, Malte Westerhoff, Matthew Zawalich, Melissa Davis, Ajay Malhotra, Khaled Bousabarah, Cornelius Deuschl, MingDe Lin, Sanjay Aneja, Mariam S. Aboian

arxiv preprint · Jun 16 2025
Brain metastases are a common complication of systemic cancer, affecting over 20% of patients with primary malignancies. Longitudinal magnetic resonance imaging (MRI) is essential for diagnosing patients, tracking disease progression, assessing therapeutic response, and guiding treatment selection. However, the manual review of longitudinal imaging is time-intensive, especially for patients with multifocal disease. Artificial intelligence (AI) offers opportunities to streamline image evaluation, but developing robust AI models requires comprehensive training data representative of real-world imaging studies. Thus, there is an urgent necessity for a large dataset with heterogeneity in imaging protocols and disease presentation. To address this, we present an open-access dataset of 11,884 longitudinal brain MRI studies from 1,430 patients with clinically confirmed brain metastases, paired with clinical and image metadata. The provided dataset will facilitate the development of AI models to assist in the long-term management of patients with brain metastasis.

Evaluating Explainability: A Framework for Systematic Assessment and Reporting of Explainable AI Features

Miguel A. Lago, Ghada Zamzmi, Brandon Eich, Jana G. Delfino

arxiv preprint · Jun 16 2025
Explainability features are intended to provide insight into the internal mechanisms of an AI device, but there is a lack of evaluation techniques for assessing the quality of the explanations provided. We propose a framework to assess and report explainable AI features. Our evaluation framework for AI explainability is based on four criteria: 1) consistency quantifies the variability of explanations to similar inputs, 2) plausibility estimates how close the explanation is to the ground truth, 3) fidelity assesses the alignment between the explanation and the model's internal mechanisms, and 4) usefulness evaluates the impact of the explanation on task performance. We also developed a scorecard for AI explainability methods that serves as a complete description and evaluation to accompany this type of algorithm. We describe these four criteria and give examples of how they can be evaluated. As a case study, we use Ablation CAM and Eigen CAM to illustrate the evaluation of explanation heatmaps for the detection of breast lesions on synthetic mammograms. The first three criteria are evaluated for clinically relevant scenarios. Our proposed framework establishes criteria through which the quality of explanations provided by AI models can be evaluated. We intend for our framework to spark a dialogue regarding the value provided by explainability features and to help improve the development and evaluation of AI-based medical devices.
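
Of the four criteria, consistency is the most mechanical to compute. One plausible operationalization (our assumption; the paper may define it differently) scores how stable an explanation heatmap is under small input perturbations:

```python
import numpy as np

def consistency_score(explain_fn, image, n_trials=8, noise_std=0.01, seed=0):
    """Mean correlation between the heatmap of an image and heatmaps of
    slightly perturbed copies. 1.0 = perfectly consistent explanations.

    explain_fn: callable mapping an image to a saliency heatmap
                (e.g. an Ablation CAM or Eigen CAM wrapper).
    """
    rng = np.random.default_rng(seed)
    base = explain_fn(image).ravel()
    corrs = []
    for _ in range(n_trials):
        noisy = image + noise_std * rng.standard_normal(image.shape)
        corrs.append(np.corrcoef(base, explain_fn(noisy).ravel())[0, 1])
    return float(np.mean(corrs))

# Toy explainer: heatmap = absolute intensity (a stand-in for a real CAM).
toy_explainer = lambda img: np.abs(img)
img = np.random.default_rng(1).random((32, 32))
print(consistency_score(toy_explainer, img))
```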

Kernelized weighted local information based picture fuzzy clustering with multivariate coefficient of variation and modified total Bregman divergence measure for brain MRI image segmentation.

Lohit H, Kumar D

pubmed · Jun 16 2025
This paper proposes a novel clustering method for noisy image segmentation using a kernelized weighted local information approach under the Picture Fuzzy Set (PFS) framework. Existing kernel-based fuzzy clustering methods struggle with noisy environments and non-linear structures, while intuitionistic fuzzy clustering methods face limitations in handling the uncertainty of real-world medical images. To address these challenges, we introduce a local picture fuzzy information measure, developed for the first time using Multivariate Coefficient of Variation (MCV) theory, enhancing robustness in segmentation. Additionally, we integrate non-Euclidean distance measures: a kernel distance for local information computation and a modified total Bregman divergence (MTBD) measure for improving clustering accuracy. This combination enhances both local spatial consistency and global membership estimation, leading to precise segmentation. The proposed method is extensively evaluated on synthetic images with Gaussian, salt-and-pepper, and mixed noise; on the BrainWeb, IBSR, and MRBrainS18 MRI datasets under varying Rician noise levels; and on a CT image template. Furthermore, we benchmark our proposed method against two deep learning-based segmentation models, ResNet34-LinkNet and patch-based U-Net. Experimental results demonstrate significant improvements in segmentation accuracy, as validated by metrics such as the Dice score, Fuzzy Performance Index, Modified Partition Entropy, Average Volume Difference (AVD), and the XB index. Additionally, Friedman's statistical test confirms the superior performance of our approach compared to state-of-the-art clustering methods for noisy image segmentation. To facilitate reproducibility, the implementation of our proposed method is publicly available at: Google Drive Repository.
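
For reference, the kernel distance used in kernelized fuzzy clustering replaces Euclidean distance with a feature-space distance; for a Gaussian kernel K it reduces to d²(x, v) = 2(1 − K(x, v)). Below is a minimal sketch of the membership update in plain kernelized fuzzy c-means, not the full picture-fuzzy method with local information and MTBD proposed in the paper:

```python
import numpy as np

def gaussian_kernel(x, v, sigma=1.0):
    """K(x, v) = exp(-||x - v||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - v) ** 2, axis=-1) / (2 * sigma ** 2))

def kfcm_memberships(data, centers, m=2.0, sigma=1.0):
    """Membership update of kernelized fuzzy c-means.

    d^2(x, v) = 2 * (1 - K(x, v)) for a Gaussian kernel K, so
    u_ik ∝ d^2(x_k, v_i)^(-1/(m-1)), normalized over clusters.
    """
    # dist2[i, k] = squared kernel distance of point k to center i
    dist2 = np.stack([2.0 * (1.0 - gaussian_kernel(data, c, sigma))
                      for c in centers]) + 1e-12   # eps avoids division by 0
    inv = dist2 ** (-1.0 / (m - 1.0))
    return inv / inv.sum(axis=0, keepdims=True)    # memberships sum to 1

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
print(kfcm_memberships(pts, centers).round(3))
```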