
Gibson, E., Ramirez, J., Woods, L. A., Berberian, S., Ottoy, J., Scott, C., Yhap, V., Gao, F., Coello, R. D., Valdes-Hernandez, M., Lange, A., Tartaglia, C., Kumar, S., Binns, M. A., Bartha, R., Symons, S., Swartz, R. H., Masellis, M., Singh, N., MacIntosh, B. J., Wardlaw, J. M., Black, S. E., Lim, A. S., Goubran, M.

medRxiv preprint · Jul 29 2025
Introduction: Enlarged perivascular spaces (PVS) are imaging markers of cerebral small vessel disease (CSVD) that are associated with age, disease phenotypes, and overall health. Quantification of PVS is challenging but necessary to expand an understanding of their role in cerebrovascular pathology. Accurate and automated segmentation of PVS on T1-weighted images would be valuable given the widespread use of T1-weighted imaging protocols in multisite clinical and research datasets. Methods: We introduce segcsvdPVS, a convolutional neural network (CNN)-based tool for automated PVS segmentation on T1-weighted images. segcsvdPVS was developed using a novel hierarchical approach that builds on existing tools and incorporates robust training strategies to enhance the accuracy and consistency of PVS segmentation. Performance was evaluated using a comprehensive strategy that included comparison to existing benchmark methods, ablation-based validation, accuracy validation against manual ground-truth annotations, correlation with age-related PVS burden as a biological benchmark, and extensive robustness testing. Results: segcsvdPVS achieved strong object-level performance for basal ganglia PVS (DSC = 0.78), exhibiting both high sensitivity (SNS = 0.80) and precision (PRC = 0.78). Although voxel-level precision was lower (PRC = 0.57), manual correction improved this by only ~3%, indicating that the additional voxels reflected primarily boundary- or extent-related differences rather than correctable false-positive error. For non-basal ganglia PVS, segcsvdPVS outperformed benchmark methods, exhibiting higher voxel-level performance across several metrics (DSC = 0.60, SNS = 0.67, PRC = 0.57, NSD = 0.77), despite overall lower performance relative to basal ganglia PVS.
Additionally, the associations between age and segmentation-derived measures of PVS burden were consistently stronger and more reliable for segcsvdPVS than for benchmark methods across three cohorts (test6, ADNI, CAHHM), providing further evidence of the accuracy and consistency of its segmentation output. Conclusions: segcsvdPVS demonstrates robust performance across diverse imaging conditions and improved sensitivity to biologically meaningful associations, supporting its utility as a T1-based PVS segmentation tool.
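The voxel-level metrics reported in this abstract (DSC, SNS, PRC) have standard definitions from the overlap counts of predicted and ground-truth masks; a minimal numpy sketch, not the authors' evaluation code:

```python
import numpy as np

def seg_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Voxel-level Dice (DSC), sensitivity (SNS), and precision (PRC)
    for two binary segmentation masks of the same shape.
    Assumes at least one foreground voxel in each mask."""
    p, t = pred.astype(bool), truth.astype(bool)
    tp = np.sum(p & t)   # voxels predicted PVS and truly PVS
    fp = np.sum(p & ~t)  # predicted PVS but background in the ground truth
    fn = np.sum(~p & t)  # missed PVS voxels
    return {
        "DSC": 2 * tp / (2 * tp + fp + fn),
        "SNS": tp / (tp + fn),  # recall over true PVS voxels
        "PRC": tp / (tp + fp),  # fraction of predicted voxels that are correct
    }
```

Object-level variants (as used for the basal ganglia results) would count matched connected components instead of voxels, which is why the two levels can diverge.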

Trofimova, O., Böttger, L., Bors, S., Pan, Y., Liefers, B., Beyeler, M. J., Presby, D. M., Bontempi, D., Hastings, J., Klaver, C. C. W., Bergmann, S.

medRxiv preprint · Jul 29 2025
Retinal fundus images offer a non-invasive window into systemic aging. Here, we fine-tuned a foundation model (RETFound) to predict chronological age from color fundus images in 71,343 participants from the UK Biobank, achieving a mean absolute error of 2.85 years. The resulting retinal age gap (RAG), i.e., the difference between predicted and chronological age, was associated with cardiometabolic traits, inflammation, cognitive performance, mortality, dementia, cancer, and incident cardiovascular disease. Genome-wide analyses identified genes related to longevity, metabolism, neurodegeneration, and age-related eye diseases. Sex-stratified models revealed consistent performance but divergent biological signatures: males had younger-appearing retinas and stronger links to metabolic syndrome, while in females, both model attention and genetic associations pointed to a greater involvement of retinal vasculature. Our study positions retinal aging as a biologically meaningful and sex-sensitive biomarker that can support more personalized approaches to risk assessment and aging-related healthcare.
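The retinal age gap and the reported error metric reduce to simple arithmetic over predicted and chronological ages; a toy sketch with made-up values (the real predictions come from the fine-tuned RETFound model):

```python
import numpy as np

# Illustrative ages only; in the study, pred_age is the model output
# for each of the 71,343 UK Biobank participants.
chron_age = np.array([60.0, 72.0, 55.0, 68.0])
pred_age = np.array([63.5, 70.0, 58.0, 66.0])

rag = pred_age - chron_age   # retinal age gap per participant
mae = np.mean(np.abs(rag))   # mean absolute error of the age model
```

A positive RAG means the retina "looks older" than the participant; the study's downstream analyses associate this gap with health outcomes.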

Bhaskara R, Oderinde OM

PubMed paper · Jul 28 2025
Objective: This study introduces a novel approach to improve Cone Beam CT (CBCT) image quality by developing a synthetic CT (sCT) generation method using CycleGAN with a Vision Transformer (ViT) and an Adaptive Fourier Neural Operator (AFNO).

Approach: A dataset of 20 prostate cancer patients who received stereotactic body radiation therapy (SBRT) was used, consisting of paired CBCT and planning CT (pCT) images. The dataset was preprocessed by registering pCTs to CBCTs using deformable registration techniques, such as B-spline, followed by resampling to uniform voxel sizes and normalization. The model architecture integrates a CycleGAN with bidirectional generators, where the UNet generator is enhanced with a ViT at the bottleneck. AFNO functions as the attention mechanism for the ViT, operating on the input data in the Fourier domain. AFNO offers resolution flexibility, mesh invariance, and efficient capture of long-range dependencies.
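The core idea of Fourier-domain token mixing can be illustrated with a toy numpy sketch: transform a feature map to frequency space, scale the retained low-frequency modes by learned weights, and transform back. This is only the spirit of AFNO; the actual operator applies block-diagonal MLPs with soft-thresholding to the modes, which is not reproduced here:

```python
import numpy as np

def spectral_mix(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Toy Fourier-domain mixing: FFT, per-mode complex scaling on the
    retained low-frequency block, inverse FFT. `w` stands in for learned
    parameters; its shape decides how many modes are kept."""
    X = np.fft.rfft2(x)                        # forward real FFT
    kept = w.shape                             # keep only this mode block
    X_mixed = np.zeros_like(X)
    X_mixed[:kept[0], :kept[1]] = X[:kept[0], :kept[1]] * w
    return np.fft.irfft2(X_mixed, s=x.shape)   # back to the spatial domain

x = np.random.default_rng(0).standard_normal((8, 8))
w = np.ones((4, 4), dtype=complex)  # identity weights on the kept modes
y = spectral_mix(x, w)              # acts as a low-pass filter here
```

Because mixing happens per frequency mode rather than per pixel, the same weights apply at any input resolution, which is the mesh-invariance property the abstract mentions.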

Main Results: Our model significantly improved the preservation of anatomical details and the capture of complex image dependencies. The AFNO mechanism processed global image information effectively, adapting to interpatient variations for accurate sCT generation. Evaluation metrics, including Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Normalized Cross-Correlation (NCC), demonstrated the superiority of our method. Specifically, the model achieved an MAE of 9.71, PSNR of 37.08 dB, SSIM of 0.97, and NCC of 0.99, confirming its efficacy.
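Two of the reported image-quality metrics have short closed forms; a minimal sketch (PSNR and NCC only, since SSIM needs windowed statistics):

```python
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB between two same-shape images."""
    mse = np.mean((a - b) ** 2)
    return float(10 * np.log10(data_range**2 / mse))

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation (Pearson correlation of intensities)."""
    a0, b0 = a - a.mean(), b - b.mean()
    return float(np.sum(a0 * b0) / np.sqrt(np.sum(a0**2) * np.sum(b0**2)))
```

For CT comparisons, `data_range` would typically be the span of the HU window used; that choice (like everything here) is an assumption, not taken from the paper.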

Significance: The integration of AFNO within the CycleGAN UNet framework addresses Cone Beam CT image quality limitations. The model generates synthetic CTs that allow adaptive treatment planning during SBRT, enabling adjustments to the dose based on tumor response, thus reducing radiotoxicity from increased doses. This method's ability to preserve both global and local anatomical features shows potential for improving tumor targeting, adaptive radiotherapy planning, and clinical decision-making.

Li L, Ma Q, Oyang C, Paetzold JC, Rueckert D, Kainz B

PubMed paper · Jul 28 2025
Deep learning-based medical image segmentation techniques have shown promising results when evaluated with conventional metrics such as the Dice score or Intersection-over-Union. However, these fully automatic methods often fail to meet clinically acceptable accuracy, especially when topological constraints should be observed, e.g., continuous boundaries or closed surfaces. In medical image segmentation, the correctness of a segmentation in terms of the required topological genus is sometimes even more important than the pixel-wise accuracy. Existing topology-aware approaches commonly estimate and constrain the topological structure via the concept of persistent homology (PH). However, these methods are difficult to apply to high-dimensional data due to their polynomial computational complexity. To overcome this problem, we propose a novel and fast approach for topology-aware segmentation based on the Euler characteristic (χ). First, we propose a fast formulation for χ computation in both 2D and 3D. The scalar χ error between the prediction and the ground truth serves as the topological evaluation metric. Then we estimate the spatial topological correctness of any segmentation network via a so-called topological violation map, i.e., a detailed map that highlights regions with χ errors. Finally, the segmentation results from an arbitrary network are refined by a topology-aware correction network based on the topological violation maps. Our experiments are conducted on both 2D and 3D datasets and show that our method significantly improves topological correctness while preserving pixel-wise segmentation accuracy.
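For a 2D binary mask, the Euler characteristic (connected components minus holes, under 8-connectivity) can be computed in closed form from counts of 2x2 pixel patterns, a classical formulation due to Gray; a sketch of that idea, not the paper's own fast formulation:

```python
import numpy as np

def euler_characteristic_2d(mask: np.ndarray) -> int:
    """Euler number of a 2D binary mask (8-connected foreground) via
    2x2 quad counting: chi = (Q1 - Q3 - 2*Qd) / 4, where Q1/Q3 count
    quads with exactly one/three foreground pixels and Qd counts
    diagonal (checkerboard) quads."""
    m = np.pad(mask.astype(np.uint8), 1)  # pad so boundary quads exist
    q = m[:-1, :-1] + m[:-1, 1:] + m[1:, :-1] + m[1:, 1:]  # fg count per quad
    diag = ((m[:-1, :-1] == m[1:, 1:]) & (m[:-1, 1:] == m[1:, :-1])
            & (m[:-1, :-1] != m[:-1, 1:]))                 # checkerboard quads
    q1 = np.sum(q == 1)
    q3 = np.sum(q == 3)
    qd = np.sum(diag)
    return int(q1 - q3 - 2 * qd) // 4
```

The scalar χ error the abstract describes would then be `abs(euler_characteristic_2d(pred) - euler_characteristic_2d(truth))`; because the computation is a single pass of local counts, it avoids the polynomial cost of persistent homology.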

Glenning J, Gualtieri L

PubMed paper · Jul 28 2025
Artificial intelligence (AI) is reshaping medical imaging with the promise of improved diagnostic accuracy and efficiency. Yet, its ethical and effective adoption depends not only on technical excellence but also on aligning implementation with patient perspectives. This commentary synthesizes emerging research on how patients perceive AI in radiology: they express cautious optimism, a desire for transparency, and a strong preference for human oversight. Patients consistently view AI as a supportive tool rather than a replacement for clinicians. We argue that centering patient voices is essential to sustaining trust, preserving the human connection in care, and ensuring that AI serves as a truly patient-centered innovation. The path forward requires participatory approaches, ethical safeguards, and transparent communication to ensure that AI enhances, rather than diminishes, the values patients hold most dear.

Yu W, Guo X, Li W, Liu X, Chen H, Yuan Y

PubMed paper · Jul 28 2025
Generating high-fidelity dental radiographs is essential for training diagnostic models. Despite the development of numerous methods for other medical data, generative approaches in dental radiology remain unexplored. Due to the intricate tooth structures and specialized terminology, these methods often yield ambiguous tooth regions and incorrect dental concepts when applied to dentistry. In this paper, we make the first attempt to investigate diffusion-based teeth X-ray image generation and propose ToothMaker, a novel framework specifically designed for the dental domain. First, to synthesize X-ray images that possess accurate tooth structures and realistic radiological styles simultaneously, we design a control-disentangled fine-tuning (CDFT) strategy. Specifically, we present two separate controllers to handle style and layout control respectively, and introduce a gradient-based decoupling method that optimizes each using its corresponding disentangled gradients. Second, to enhance the model's understanding of dental terminology, we propose a prior-disentangled guidance module (PDGM), enabling precise synthesis of dental concepts. It utilizes a large language model to decompose dental terminology into a series of meta-knowledge elements and performs interactions and refinements through a hypergraph neural network. These elements are then fed into the network to guide the generation of dental concepts. Extensive experiments demonstrate the high fidelity and diversity of the images synthesized by our approach. By incorporating the generated data, we achieve substantial performance improvements on downstream segmentation and visual question answering tasks, indicating that our method can greatly reduce reliance on manually annotated data. Code will be publicly available at https://github.com/CUHK-AIM-Group/ToothMaker.

Fotis A, Lalwani N, Gupta P, Yee J

PubMed paper · Jul 28 2025
AI is rapidly transforming abdominal radiology. This scoping review mapped current applications across segmentation, detection, classification, prediction, and workflow optimization based on 432 studies published between 2019 and 2024. Most studies focused on CT imaging, with fewer involving MRI, ultrasound, or X-ray. Segmentation models (e.g., U-Net) performed well in liver and pancreatic imaging (Dice coefficient 0.65-0.90). Classification models (e.g., ResNet, DenseNet) were commonly used for diagnostic labeling, with reported sensitivities ranging from 52 to 100% and specificities from 40.7 to 99%. A small number of studies employed true object detection models (e.g., YOLOv3, YOLOv7, Mask R-CNN) capable of spatial lesion localization, marking an emerging trend toward localization-based AI. Predictive models demonstrated AUCs between 0.62 and 0.99 but often lacked interpretability and external validation. Workflow optimization studies reported improved efficiency (e.g., reduced report turnaround and scan repetition), though standardized benchmarks were often missing. Major gaps identified include limited real-world validation, underuse of non-CT modalities, and unclear regulatory pathways. Successful clinical integration will require robust validation, practical implementation, and interdisciplinary collaboration.

Zhang M, Li Y, Zheng C, Xie F, Zhao Z, Dai F, Wang J, Wu H, Zhu Z, Liu Q, Li Y

PubMed paper · Jul 28 2025
This study aimed to develop a fully automated 3D multi-modal deep learning model using preoperative <sup>18</sup>F-FDG PET/CT to predict the T-stage of colorectal cancer (CRC) and evaluate its clinical utility. A retrospective cohort of 474 CRC patients was included, with 400 patients in the internal cohort and 74 in the external cohort. Patients were classified into early T-stage (T1-T2) and advanced T-stage (T3-T4) groups. Automatic segmentation of the volume of interest (VOI) was achieved based on TotalSegmentator. A 3D ResNet18-based deep learning model integrated with a cross-multi-head attention mechanism was developed. Five models (CT + PET + Clinic (CPC), CT + PET (CP), PET (P), CT (C), Clinic) and two radiologists' assessments were compared. Performance was evaluated using Area Under the Curve (AUC). Grad-CAM was employed to provide visual interpretability of decision-critical regions. The automated segmentation achieved Dice scores of 0.884 (CT) and 0.888 (PET). The CPC and CP models achieved superior performance, with AUCs of 0.869 and 0.869 in the internal validation cohort, respectively, outperforming single-modality models (P: 0.832; C: 0.809; Clinic: 0.728) and the radiologists (AUC: 0.627, P < 0.05 for all models vs. radiologists, except for the Clinical model). External validation exhibited a similar trend, with AUCs of 0.814, 0.812, 0.763, 0.714, 0.663 and 0.704, respectively. Grad-CAM visualization highlighted tumor-centric regions for early T-stage and peri-tumoral tissue infiltration for advanced T-stage. The fully automated multimodal model, fusing PET/CT with cross-multi-head attention, improved T-stage prediction in CRC, surpassing the single-modality models and radiologists and offering a time-efficient tool to aid clinical decision-making.
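The fusion mechanism named here, cross-attention between modalities, can be sketched in a few lines: tokens from one modality form the queries, tokens from the other form the keys and values. A single-head numpy toy with random weights, not the paper's 3D multi-head architecture:

```python
import numpy as np

def cross_attention(q_feats, kv_feats, Wq, Wk, Wv):
    """Single-head cross-attention sketch: tokens of one modality
    (queries, e.g. CT features) attend to tokens of another
    (keys/values, e.g. PET features)."""
    Q, K, V = q_feats @ Wq, kv_feats @ Wk, kv_feats @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])           # scaled dot-product
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)          # softmax over PET tokens
    return attn @ V                                   # PET-informed CT features

rng = np.random.default_rng(1)
d = 16  # illustrative token dimension
ct_tokens = rng.standard_normal((10, d))
pet_tokens = rng.standard_normal((12, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
fused = cross_attention(ct_tokens, pet_tokens, Wq, Wk, Wv)
```

A multi-head version would run several such maps with separate projections and concatenate the outputs; here everything (shapes, weights, token counts) is an assumption for illustration.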

Choghazardi Y, Tavakoli MB, Abedi I, Roayaei M, Hemati S, Shanei A

PubMed paper · Jul 28 2025
The lower image contrast of megavoltage computed tomography (MVCT) relative to kilovoltage computed tomography (kVCT) can inhibit accurate dosimetric assessment. This study proposes a deep learning approach, specifically the pix2pix network, to generate high-quality synthetic kVCT (skVCT) images from MVCT data. The model was trained on a dataset of 25 paired patient images and evaluated on a test set of 15 paired images. We performed visual inspections to assess the quality of the generated skVCT images and calculated the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Dosimetric equivalence was evaluated by comparing the gamma pass rates of treatment plans derived from skVCT and kVCT images. Results showed that skVCT images exhibited significantly higher quality than MVCT images, with PSNR and SSIM values of 31.9 ± 1.1 dB and 94.8% ± 1.3%, respectively, compared to 26.8 ± 1.7 dB and 89.5% ± 1.5% for MVCT-to-kVCT comparisons. Furthermore, treatment plans based on skVCT images achieved excellent gamma pass rates of 99.78 ± 0.14% and 99.82 ± 0.20% for 2 mm/2% and 3 mm/3% criteria, respectively, comparable to those obtained from kVCT-based plans (99.70 ± 0.31% and 99.79 ± 1.32%). This study demonstrates the potential of pix2pix models for generating high-quality skVCT images, which could significantly enhance Adaptive Radiation Therapy (ART).
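The gamma pass rate quoted above combines a dose-difference criterion (e.g. 3%) with a distance-to-agreement criterion (e.g. 3 mm). A deliberately simplified brute-force 2D global gamma sketch; clinical tools interpolate dose grids and optimize the search, none of which is done here:

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing, dd=0.03, dta=3.0):
    """Simplified global 2D gamma analysis. For each reference point,
    take the minimum over evaluated points of
    sqrt(distance^2/dta^2 + dose_diff^2/(dd*max_dose)^2)
    and report the fraction of points with gamma <= 1.
    `spacing` is the pixel spacing in mm; dd is fractional."""
    ys, xs = np.meshgrid(np.arange(ref.shape[0]) * spacing,
                         np.arange(ref.shape[1]) * spacing, indexing="ij")
    dmax = ref.max()
    gammas = np.empty(ref.shape)
    for i in range(ref.shape[0]):
        for j in range(ref.shape[1]):
            dist2 = (ys - ys[i, j]) ** 2 + (xs - xs[i, j]) ** 2
            dose2 = ((ev - ref[i, j]) / (dd * dmax)) ** 2
            gammas[i, j] = np.sqrt(np.min(dist2 / dta**2 + dose2))
    return float(np.mean(gammas <= 1.0))
```

Identical dose grids trivially pass everywhere; the published 2%/2 mm and 3%/3 mm rates correspond to `dd=0.02, dta=2.0` and `dd=0.03, dta=3.0` under this convention.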

Theodoropoulos D, Trivizakis E, Marias K, Xirouchaki N, Vakis A, Papadaki E, Karantanas A, Karabetsos DA

PubMed paper · Jul 28 2025
Elevated intracranial pressure (ICP) is a serious condition that demands prompt diagnosis to avoid significant neurological injury or even death. Although invasive techniques remain the "gold standard" for ICP measurement, they are time-consuming and pose risks of complications. Various noninvasive methods have been suggested, but their experimental status limits their use in emergency situations. On the other hand, although artificial intelligence has rapidly evolved, it has not yet fully harnessed fast-acquisition modalities such as computed tomography (CT) scans to evaluate ICP, likely due to the lack of available annotated data sets. In this article, we present research that addresses this gap by training four distinct deep learning models on a custom data set enhanced with demographic data and Glasgow Coma Scale (GCS) values. A key innovation of our study is the incorporation of demographic data and GCS values as additional channels of the scans. The models were trained and validated on a custom data set consisting of paired CT brain scans (n = 578) with corresponding ICP values, supplemented by GCS scores and demographic data. The algorithm addresses a binary classification problem by predicting whether ICP levels exceed a predetermined threshold of 15 mm Hg. The top-performing models achieved an area under the curve of 88.3% and a recall of 81.8%. An algorithm that enhances the transparency of the model's decisions was used to provide insights into where the models focus when generating outcomes, both for the best- and lowest-performing models. This study demonstrates the potential of AI-based models to evaluate ICP levels from brain CT scans with high recall. Although promising, further improvements are necessary in the future to validate these findings and improve clinical applicability.
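Feeding scalars such as GCS and demographics to a CNN "as additional channels" usually means broadcasting each (normalized) value into a constant plane stacked on the image; a minimal sketch of that idea, where the normalization ranges are assumptions rather than the paper's choices:

```python
import numpy as np

def add_scalar_channels(ct_slice: np.ndarray, gcs: int, age: float) -> np.ndarray:
    """Stack a CT slice with constant planes encoding GCS and age.
    GCS is scaled by its maximum of 15; the age divisor of 100 is an
    arbitrary illustrative choice."""
    gcs_plane = np.full(ct_slice.shape, gcs / 15.0)
    age_plane = np.full(ct_slice.shape, age / 100.0)
    return np.stack([ct_slice, gcs_plane, age_plane])  # shape (3, H, W)

x = add_scalar_channels(np.zeros((64, 64)), gcs=9, age=54.0)
```

The appeal of this encoding is that a standard image backbone can consume the clinical covariates without any architectural change; the cost is redundant storage of each scalar across the whole plane.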
