Page 78 of 1611605 results

AI based automatic measurement of split renal function in [<sup>18</sup>F]PSMA-1007 PET/CT.

Valind K, Ulén J, Gålne A, Jögi J, Minarik D, Trägårdh E

pubmed · Jun 16 2025
Prostate-specific membrane antigen (PSMA) is an important target for positron emission tomography (PET) with computed tomography (CT) in prostate cancer. In addition to overexpression in prostate cancer cells, PSMA is expressed in healthy cells in the proximal tubules of the kidneys. Consequently, PSMA PET is being explored for renal functional imaging. Left and right renal uptake of PSMA-targeted radiopharmaceuticals has shown strong correlations with split renal function (SRF) as determined by other methods. Manual segmentation of kidneys in PET images is, however, time-consuming, making this method of measuring SRF impractical. In this study, we designed, trained and validated an artificial intelligence (AI) model for automatic renal segmentation and measurement of SRF in [<sup>18</sup>F]PSMA-1007 PET images. Kidneys were segmented in 135 [<sup>18</sup>F]PSMA-1007 PET/CT studies used to train the AI model. The model was evaluated in 40 test studies. Left renal function percentage (LRF%) measurements ranged from 40 to 67%. Spearman correlation coefficients for LRF% measurements ranged between 0.98 and 0.99 when comparing segmentations made by 3 human readers and the AI model. The largest LRF% difference between any measurements in a single case was 3 percentage points. The AI model produced measurements similar to those of human readers. Automatic measurement of SRF in PSMA PET is feasible. A potential use could be to provide additional data in investigation of renal functional impairment in patients treated for prostate cancer.
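The SRF measurement underlying this work reduces, after segmentation, to the ratio of left and right renal uptake. A minimal sketch of that final step (the function name and the choice of uptake quantity are illustrative, not taken from the paper):

```python
def split_renal_function(left_uptake: float, right_uptake: float) -> tuple[float, float]:
    """Return (LRF%, RRF%) from total segmented left/right kidney uptake.

    The uptake quantity (e.g. summed activity within the segmented kidney)
    is an assumption here; the paper reports only the resulting LRF%.
    """
    total = left_uptake + right_uptake
    if total <= 0:
        raise ValueError("total renal uptake must be positive")
    lrf = 100.0 * left_uptake / total
    return lrf, 100.0 - lrf
```

With automatic segmentation providing the two uptake values, this ratio is all that separates the AI output from an SRF estimate.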

Two-stage convolutional neural network for segmentation and detection of carotid web on CT angiography.

Kuang H, Tan X, Bala F, Huang J, Zhang J, Alhabli I, Benali F, Singh N, Ganesh A, Coutts SB, Almekhlafi MA, Goyal M, Hill MD, Qiu W, Menon BK

pubmed · Jun 16 2025
Carotid web (CaW) is a risk factor for ischemic stroke, mainly in young patients with stroke of undetermined etiology. Its detection is challenging, especially for less experienced physicians. We included patients with CaW from six international trials and registries of patients with acute ischemic stroke. Identification and manual segmentation of CaW were performed by three trained radiologists. We designed a two-stage segmentation strategy based on a convolutional neural network (CNN). At the first stage, the two carotid arteries were segmented using a U-shaped CNN. At the second stage, the segmentation of the CaW was first confined to the vicinity of the carotid arteries. Then, the carotid bifurcation region was localized by the proposed carotid bifurcation localization algorithm followed by another U-shaped CNN. A volume threshold based on the derived CaW manual segmentation statistics was then used to determine whether CaW was present. We included 58 patients (median (IQR) age 59 (50-75) years, 60% women). The Dice similarity coefficient and 95th percentile Hausdorff distance between manually segmented CaW and the algorithm segmented CaW were 63.20±19.03% and 1.19±0.9 mm, respectively. Using a volume threshold of 5 mm<sup>3</sup>, binary classification detection metrics for CaW on a single artery were as follows: accuracy: 92.2% (95% CI 87.93% to 96.55%), precision: 94.83% (95% CI 88.68% to 100.00%), sensitivity: 90.16% (95% CI 82.16% to 96.97%), specificity: 94.55% (95% CI 88.0% to 100.0%), F1 measure: 0.9244 (95% CI 0.8679 to 0.9692), area under the curve: 0.9235 (95% CI 0.8726 to 0.9688). The proposed two-stage method enables reliable segmentation and detection of CaW from head and neck CT angiography.
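The final detection step described above, a volume threshold applied to the CNN segmentation, plus the per-artery metrics the abstract reports, can be sketched as follows (the function names and the confusion-matrix helper are illustrative, not the authors' code):

```python
def detect_caw(segmented_volume_mm3: float, threshold_mm3: float = 5.0) -> bool:
    """CaW is called present when the segmented volume meets the 5 mm^3 threshold."""
    return segmented_volume_mm3 >= threshold_mm3

def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Per-artery detection metrics of the kind reported in the abstract."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": precision,
        "sensitivity": sensitivity,
        "specificity": tn / (tn + fp),
        "f1": 2 * precision * sensitivity / (precision + sensitivity),
    }
```

The threshold converts a voxel-wise segmentation into a per-artery yes/no call, which is what the reported accuracy, precision, and F1 are computed over.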

PRO: Projection Domain Synthesis for CT Imaging

Kang Chen, Bin Huang, Xuebin Yang, Junyan Zhang, Qiegen Liu

arxiv · preprint · Jun 16 2025
Synthesizing high-quality CT images remains a significant challenge due to the limited availability of annotated data and the complex nature of CT imaging. In this work, we present PRO, a novel framework that, to the best of our knowledge, is the first to perform CT image synthesis in the projection domain using latent diffusion models. Unlike previous approaches that operate in the image domain, PRO learns rich structural representations from raw projection data and leverages anatomical text prompts for controllable synthesis. This projection domain strategy enables more faithful modeling of underlying imaging physics and anatomical structures. Moreover, PRO functions as a foundation model, capable of generalizing across diverse downstream tasks by adjusting its generative behavior via prompt inputs. Experimental results demonstrated that incorporating our synthesized data significantly improves performance across multiple downstream tasks, including low-dose and sparse-view reconstruction, even with limited training data. These findings underscore the versatility and scalability of PRO in data generation for various CT applications. These results highlight the potential of projection domain synthesis as a powerful tool for data augmentation and robust CT imaging. Our source code is publicly available at: https://github.com/yqx7150/PRO.

ViT-NeBLa: A Hybrid Vision Transformer and Neural Beer-Lambert Framework for Single-View 3D Reconstruction of Oral Anatomy from Panoramic Radiographs

Bikram Keshari Parida, Anusree P. Sunilkumar, Abhijit Sen, Wonsang You

arxiv · preprint · Jun 16 2025
Dental diagnosis relies on two primary imaging modalities: panoramic radiographs (PX) providing 2D oral cavity representations, and Cone-Beam Computed Tomography (CBCT) offering detailed 3D anatomical information. While PX images are cost-effective and accessible, their lack of depth information limits diagnostic accuracy. CBCT addresses this but presents drawbacks including higher costs, increased radiation exposure, and limited accessibility. Existing reconstruction models further complicate the process by requiring CBCT flattening or prior dental arch information, often unavailable clinically. We introduce ViT-NeBLa, a vision transformer-based Neural Beer-Lambert model enabling accurate 3D reconstruction directly from single PX. Our key innovations include: (1) enhancing the NeBLa framework with Vision Transformers for improved reconstruction capabilities without requiring CBCT flattening or prior dental arch information, (2) implementing a novel horseshoe-shaped point sampling strategy with non-intersecting rays that eliminates intermediate density aggregation required by existing models due to intersecting rays, reducing sampling point computations by 52%, (3) replacing CNN-based U-Net with a hybrid ViT-CNN architecture for superior global and local feature extraction, and (4) implementing learnable hash positional encoding for better higher-dimensional representation of 3D sample points compared to existing Fourier-based dense positional encoding. Experiments demonstrate that ViT-NeBLa significantly outperforms prior state-of-the-art methods both quantitatively and qualitatively, offering a cost-effective, radiation-efficient alternative for enhanced dental diagnostics.
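The Beer-Lambert model at the core of the framework attenuates incident intensity exponentially with the density accumulated along each (here non-intersecting) ray. A schematic sketch of that physics under assumed units; the paper's actual renderer samples a learned density field, so this helper is hypothetical:

```python
import math

def beer_lambert_intensity(mu_samples, step_mm: float, i0: float = 1.0) -> float:
    """Attenuate intensity i0 through per-sample attenuation coefficients
    mu (assumed units 1/mm) spaced step_mm apart along one ray:
    I = I0 * exp(-sum(mu_i) * step)."""
    return i0 * math.exp(-sum(mu_samples) * step_mm)
```

In a NeBLa-style model the `mu_samples` would come from a network queried at 3D sample points, and the predicted intensity is compared against the observed PX pixel to supervise reconstruction.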

Beyond the First Read: AI-Assisted Perceptual Error Detection in Chest Radiography Accounting for Interobserver Variability

Adhrith Vutukuri, Akash Awasthi, David Yang, Carol C. Wu, Hien Van Nguyen

arxiv · preprint · Jun 16 2025
Chest radiography is widely used in diagnostic imaging. However, perceptual errors, especially overlooked but visible abnormalities, remain common and clinically significant. Current workflows and AI systems provide limited support for detecting such errors after interpretation and often lack meaningful human-AI collaboration. We introduce RADAR (Radiologist-AI Diagnostic Assistance and Review), a post-interpretation companion system. RADAR ingests finalized radiologist annotations and CXR images, then performs regional-level analysis to detect and refer potentially missed abnormal regions. The system supports a "second-look" workflow and offers suggested regions of interest (ROIs) rather than fixed labels to accommodate inter-observer variation. We evaluated RADAR on a simulated perceptual-error dataset derived from de-identified CXR cases, using F1 score and Intersection over Union (IoU) as primary metrics. RADAR achieved a recall of 0.78, precision of 0.44, and an F1 score of 0.56 in detecting missed abnormalities in the simulated perceptual-error dataset. Although precision is moderate, this reduces over-reliance on AI by encouraging radiologist oversight in human-AI collaboration. The median IoU was 0.78, with more than 90% of referrals exceeding 0.5 IoU, indicating accurate regional localization. RADAR effectively complements radiologist judgment, providing valuable post-read support for perceptual-error detection in CXR interpretation. Its flexible ROI suggestions and non-intrusive integration position it as a promising tool in real-world radiology workflows. To facilitate reproducibility and further evaluation, we release a fully open-source web implementation alongside a simulated error dataset. All code, data, demonstration videos, and the application are publicly available at https://github.com/avutukuri01/RADAR.
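The localization metric reported, IoU between a suggested ROI and the missed finding with a 0.5 cutoff, can be sketched for axis-aligned boxes (the box convention and function names are assumptions, not the released code):

```python
def box_iou(a, b) -> float:
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def accurate_referral(pred_box, missed_box, iou_threshold: float = 0.5) -> bool:
    """A suggested ROI counts as localizing the missed finding above the cutoff."""
    return box_iou(pred_box, missed_box) >= iou_threshold
```

Under this definition, "more than 90% of referrals exceeding 0.5 IoU" means `accurate_referral` would return True for over nine in ten suggested ROIs.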

Default Mode Network Connectivity Predicts Individual Differences in Long-Term Forgetting: Evidence for Storage Degradation, not Retrieval Failure

Xu, Y., Prat, C. S., Sense, F., van Rijn, H., Stocco, A.

biorxiv · preprint · Jun 16 2025
Despite the importance of memories in everyday life and the progress made in understanding how they are encoded and retrieved, the neural processes by which declarative memories are maintained or forgotten remain elusive. Part of the problem is that it is empirically difficult to measure the rate at which memories fade, even between repeated presentations of the source of the memory. Without such a ground-truth measure, it is hard to identify the corresponding neural correlates. This study addresses this problem by comparing individual patterns of functional connectivity against behavioral differences in forgetting speed derived from computational phenotyping. Specifically, the individual-specific values of the speed of forgetting in long-term memory (LTM) were estimated for 33 participants using a formal model fit to accuracy and response time data from an adaptive paired-associate learning task. Individual speeds of forgetting were then used to examine participant-specific patterns of resting-state fMRI connectivity, using machine learning techniques to identify the most predictive and generalizable features. Our results show that individual speeds of forgetting are associated with resting-state connectivity within the default mode network (DMN) as well as between the DMN and cortical sensory areas. Cross-validation showed that individual speeds of forgetting were predicted with high accuracy (r = .78) from these connectivity patterns alone. These results support the view that DMN activity and the associated sensory regions are actively involved in maintaining memories and preventing their decline, a view that can be seen as evidence for the hypothesis that forgetting is a result of storage degradation, rather than of retrieval failure.
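The reported accuracy (r = .78) is a Pearson correlation between cross-validated predictions and the observed speeds of forgetting. The correlation step can be sketched in plain Python (the feature selection and regression model themselves are not reproduced here):

```python
def pearson_r(x, y) -> float:
    """Pearson correlation between predicted and observed values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

In the study's scheme, `x` would hold held-out predictions from connectivity features and `y` the model-derived speeds of forgetting for the same participants.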

An 11,000-Study Open-Access Dataset of Longitudinal Magnetic Resonance Images of Brain Metastases

Saahil Chadha, David Weiss, Anastasia Janas, Divya Ramakrishnan, Thomas Hager, Klara Osenberg, Klara Willms, Joshua Zhu, Veronica Chiang, Spyridon Bakas, Nazanin Maleki, Durga V. Sritharan, Sven Schoenherr, Malte Westerhoff, Matthew Zawalich, Melissa Davis, Ajay Malhotra, Khaled Bousabarah, Cornelius Deuschl, MingDe Lin, Sanjay Aneja, Mariam S. Aboian

arxiv · preprint · Jun 16 2025
Brain metastases are a common complication of systemic cancer, affecting over 20% of patients with primary malignancies. Longitudinal magnetic resonance imaging (MRI) is essential for diagnosing patients, tracking disease progression, assessing therapeutic response, and guiding treatment selection. However, the manual review of longitudinal imaging is time-intensive, especially for patients with multifocal disease. Artificial intelligence (AI) offers opportunities to streamline image evaluation, but developing robust AI models requires comprehensive training data representative of real-world imaging studies. Thus, there is an urgent necessity for a large dataset with heterogeneity in imaging protocols and disease presentation. To address this, we present an open-access dataset of 11,884 longitudinal brain MRI studies from 1,430 patients with clinically confirmed brain metastases, paired with clinical and image metadata. The provided dataset will facilitate the development of AI models to assist in the long-term management of patients with brain metastasis.

Evaluating Explainability: A Framework for Systematic Assessment and Reporting of Explainable AI Features

Miguel A. Lago, Ghada Zamzmi, Brandon Eich, Jana G. Delfino

arxiv · preprint · Jun 16 2025
Explainability features are intended to provide insight into the internal mechanisms of an AI device, but there is a lack of evaluation techniques for assessing the quality of provided explanations. We propose a framework to assess and report explainable AI features. Our evaluation framework for AI explainability is based on four criteria: 1) Consistency quantifies the variability of explanations to similar inputs, 2) Plausibility estimates how close the explanation is to the ground truth, 3) Fidelity assesses the alignment between the explanation and the model internal mechanisms, and 4) Usefulness evaluates the impact of the explanation on task performance. Finally, we developed a scorecard for AI explainability methods that serves as a complete description and evaluation to accompany this type of algorithm. We describe these four criteria and give examples of how they can be evaluated. As a case study, we use Ablation CAM and Eigen CAM to illustrate the evaluation of explanation heatmaps on the detection of breast lesions on synthetic mammograms. The first three criteria are evaluated for clinically relevant scenarios. Our proposed framework establishes criteria through which the quality of explanations provided by AI models can be evaluated. We intend for our framework to spark a dialogue regarding the value provided by explainability features and help improve the development and evaluation of AI-based medical devices.
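Of the four criteria, Consistency is the most directly computable: it quantifies how stable explanations are across similar inputs. One plausible operationalization, mean pairwise cosine similarity of flattened heatmaps, is sketched below; this particular metric choice is an assumption, not the paper's definition:

```python
import math

def cosine(u, v) -> float:
    """Cosine similarity between two flattened heatmaps."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def consistency_score(heatmaps) -> float:
    """Mean pairwise cosine similarity between explanation heatmaps produced
    for perturbed versions of the same input; 1.0 = perfectly stable."""
    pairs = [(i, j) for i in range(len(heatmaps)) for j in range(i + 1, len(heatmaps))]
    return sum(cosine(heatmaps[i], heatmaps[j]) for i, j in pairs) / len(pairs)
```

A scorecard entry could then report this score over a set of clinically motivated perturbations (noise, small translations) of each test image.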

First experiences with an adaptive pelvic radiotherapy system: Analysis of treatment times and learning curve.

Benzaquen D, Taussky D, Fave V, Bouveret J, Lamine F, Letenneur G, Halley A, Solmaz Y, Champion A

pubmed · Jun 16 2025
The Varian Ethos system allows not only on-treatment-table plan adaptation but also automated contouring with the aid of artificial intelligence. This study evaluates the initial clinical implementation of an adaptive pelvic radiotherapy system, focusing on the treatment times and the associated learning curve. We analyzed the data from 903 consecutive treatments for most urogenital cancers at our center. The treatment time was calculated from the time of the first cone-beam computed tomography scan used for replanning until the end of treatment. To calculate whether treatments were generally shorter over time, we divided the date of the first treatment into 3-month quartiles. Differences between the groups were calculated using t-tests. The mean time from the first cone-beam computed tomography scan to the end of treatment was 25.9 min (standard deviation: 6.9 min). Treatment time depended on the number of planning target volumes and treatment of the pelvic lymph nodes. The mean time from cone-beam computed tomography to the end of treatment was 37% longer if the pelvic lymph nodes were treated and 26% longer if there were more than two planning target volumes. There was a learning curve: in linear regression analysis, both quartiles of months of treatment (odds ratio [OR]: 1.3, 95% confidence interval [CI]: 0.70-1.8, P<0.001) and the number of planning target volumes (OR: 3.0, 95% CI: 2.6-3.4, P<0.001) were predictive of treatment time. Approximately two-thirds of the treatments were delivered within 33 min. Treatment time was strongly dependent on the number of separate planning target volumes. There was a continuous learning curve.

Roadmap analysis for coronary artery stenosis detection and percutaneous coronary intervention prediction in cardiac CT for transcatheter aortic valve replacement.

Fujito H, Jilaihawi H, Han D, Gransar H, Hashimoto H, Cho SW, Lee S, Gheyath B, Park RH, Patel D, Guo Y, Kwan AC, Hayes SW, Thomson LEJ, Slomka PJ, Dey D, Makkar R, Friedman JD, Berman DS

pubmed · Jun 16 2025
The new artificial intelligence-based software, Roadmap (HeartFlow), may assist in evaluating coronary artery stenosis during cardiac computed tomography (CT) for transcatheter aortic valve replacement (TAVR). Consecutive TAVR candidates who underwent both cardiac CT angiography (CTA) and invasive coronary angiography were enrolled. We evaluated the ability of three methods to predict obstructive coronary artery disease (CAD), defined as ≥50% stenosis on quantitative coronary angiography (QCA), and the need for percutaneous coronary intervention (PCI) within one year: Roadmap, clinician CT specialists with Roadmap, and CT specialists alone. The area under the curve (AUC) for predicting QCA ≥50% stenosis was similar for CT specialists with or without Roadmap (0.93 [0.85-0.97] vs. 0.94 [0.88-0.98], p = 0.82), both significantly higher than Roadmap alone (all p < 0.05). For PCI prediction, no significant differences were found between QCA and CT specialists, with or without Roadmap, while Roadmap's AUC was lower (all p < 0.05). The negative predictive value (NPV) of CT specialists with Roadmap for ≥50% stenosis was 97%, and for PCI prediction, the NPV was comparable to QCA (p = 1.00). In contrast, the positive predictive value (PPV) of Roadmap alone for ≥50% stenosis was 49%, the lowest among all approaches, with a similar trend observed for PCI prediction. While Roadmap alone is insufficient for clinical decision-making due to low PPV, Roadmap may serve as a "second observer", providing a supportive tool for CT specialists by flagging lesions for careful review, thereby enhancing workflow efficiency and maintaining high diagnostic accuracy with excellent NPV.
