Evaluation of GPT-4o for multilingual translation of radiology reports across imaging modalities.

Terzis R, Salam B, Nowak S, Mueller PT, Mesropyan N, Oberlinkels L, Efferoth AF, Kravchenko D, Voigt M, Ginzburg D, Pieper CC, Hayawi M, Kuetting D, Afat S, Maintz D, Luetkens JA, Kaya K, Isaak A

PubMed · Jul 29 2025
Large language models (LLMs) like GPT-4o offer multilingual and real-time translation capabilities. This study aims to evaluate GPT-4o's effectiveness in translating radiology reports into different languages. In this experimental two-center study, 100 real-world radiology reports from four imaging modalities (X-ray, ultrasound, CT, MRI) were randomly selected and fully anonymized. Reports were translated using GPT-4o with zero-shot prompting from German into four languages: English, French, Spanish, and Russian (n = 400 translations). Eight bilingual radiologists (two per language) evaluated the translations for general readability, overall quality, and utility for translators using 5-point Likert scales (ranging from 5 [best score] to 1 [worst score]). Binary (yes/no) questions were used to evaluate potentially harmful errors, completeness, and factual correctness. The average processing time of GPT-4o for translating reports ranged from 9 to 24 seconds. The overall quality of translations achieved a median of 4.5 (IQR 4-5), with English (5 [4-5]), French, and Spanish (each 4.5 [4-5]) significantly outperforming Russian (4 [3.5-4]; each p < 0.05). Usefulness for translators was rated highest for English (5 [5-5], p < 0.05 against other languages). Readability scores and translation completeness were significantly higher for translations into Spanish, English, and French compared to Russian (each p < 0.05). Factual correctness averaged 79%, with English (84%) and French (83%) outperforming Russian (69%) (each p < 0.05). Potentially harmful errors were identified in 4% of translations, primarily in Russian (9%). GPT-4o demonstrated robust performance in translating radiology reports across multiple languages, with limitations observed in Russian translations.
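
The abstract describes the translation pipeline only at a high level; a minimal sketch of zero-shot report translation with the OpenAI Python SDK might look like the following. The prompt wording, temperature, and the `translate_report` helper are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch: zero-shot translation of an anonymized radiology report
# with the OpenAI Python SDK. Prompt wording and parameters are assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def translate_report(report_text: str, target_language: str) -> str:
    """Translate a German radiology report into the target language (zero-shot)."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": f"Translate the following German radiology report into "
                        f"{target_language}. Preserve medical terminology and report structure."},
            {"role": "user", "content": report_text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

# Example usage (hypothetical report snippet):
# print(translate_report("Kein Nachweis einer Fraktur.", "English"))
```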

Multi-Faceted Consistency learning with active cross-labeling for barely-supervised 3D medical image segmentation.

Wu X, Xu Z, Tong RK

PubMed · Jul 29 2025
Deep learning-driven 3D medical image segmentation generally necessitates dense voxel-wise annotations, which are expensive and labor-intensive to acquire. Cross-annotation, which labels only a few orthogonal slices per scan, has recently emerged as a cost-effective alternative that better preserves the shape and precise boundaries of the 3D object than traditional weak labeling methods such as bounding boxes and scribbles. However, learning from such sparse labels, referred to as barely-supervised learning (BSL), remains challenging due to less fine-grained object perception, less compact class features and inferior generalizability. To tackle these challenges and foster collaboration between model training and human expertise, we propose a Multi-Faceted ConSistency learning (MF-ConS) framework with a Diversity and Uncertainty Sampling-based Active Learning (DUS-AL) strategy, specifically designed for the active BSL scenario. This framework combines a cross-annotation BSL strategy, where only three orthogonal slices are labeled per scan, with an AL paradigm guided by DUS to direct human-in-the-loop annotation toward the most informative volumes under a fixed budget. Built upon a teacher-student architecture, MF-ConS integrates three complementary consistency regularization modules: (i) neighbor-informed object prediction consistency for advancing fine-grained object perception by encouraging the student model to infer complete segmentation from masked inputs; (ii) prototype-driven consistency, which enhances intra-class compactness and discriminativeness by aligning latent feature and decision spaces using fused prototypes; and (iii) stability constraint that promotes model robustness against input perturbations. Extensive experiments on three benchmark datasets demonstrate that MF-ConS (DUS-AL) consistently outperforms state-of-the-art methods under extremely limited annotation.
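
The neighbor-informed consistency idea (the student infers the complete segmentation from a masked scan while a teacher sees the unmasked scan) can be sketched roughly as below. The masking scheme, loss weighting, and EMA momentum are illustrative assumptions, not the exact MF-ConS modules.

```python
# Rough sketch of teacher-student consistency on masked 3D inputs in a
# barely-supervised setting (a few labeled orthogonal slices per scan).
import torch
import torch.nn.functional as F

def consistency_loss(student, teacher, volume, labels=None, label_mask=None, mask_ratio=0.3):
    # volume: (N, 1, D, H, W); labels/label_mask mark the annotated slices per scan.
    drop = (torch.rand_like(volume) < mask_ratio).float()
    student_logits = student(volume * (1.0 - drop))       # student sees a masked scan
    with torch.no_grad():
        teacher_logits = teacher(volume)                   # teacher sees the full scan
    loss = F.mse_loss(student_logits.softmax(dim=1), teacher_logits.softmax(dim=1))
    if labels is not None and label_mask is not None:      # sparse supervised term
        sup = F.cross_entropy(student_logits, labels, reduction="none")
        loss = loss + (sup * label_mask).sum() / label_mask.sum().clamp(min=1)
    return loss

@torch.no_grad()
def update_teacher(teacher, student, momentum=0.99):
    # Teacher weights follow an exponential moving average of the student
    # (teacher is typically initialized as a copy of the student).
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)
```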

Enhancing Synthetic Pelvic CT Generation from CBCT using Vision Transformer with Adaptive Fourier Neural Operators.

Bhaskara R, Oderinde OM

PubMed · Jul 28 2025
This study introduces a novel approach to improve Cone Beam CT (CBCT) image quality by developing a synthetic CT (sCT) generation method using CycleGAN with a Vision Transformer (ViT) and an Adaptive Fourier Neural Operator (AFNO). Approach: A dataset of 20 prostate cancer patients who received stereotactic body radiation therapy (SBRT) was used, consisting of paired CBCT and planning CT (pCT) images. The dataset was preprocessed by registering pCTs to CBCTs using deformable registration techniques, such as B-spline, followed by resampling to uniform voxel sizes and normalization. The model architecture integrates a CycleGAN with bidirectional generators, where the UNet generator is enhanced with a ViT at the bottleneck. AFNO functions as the attention mechanism for the ViT, operating on the input data in the Fourier domain. AFNO's innovations handle varying resolutions, mesh invariance, and efficient long-range dependency capture. Main Results: Our model showed significant improvements in preserving anatomical details and capturing complex image dependencies. The AFNO mechanism processed global image information effectively, adapting to interpatient variations for accurate sCT generation. Evaluation metrics such as Mean Absolute Error (MAE), Peak Signal to Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Normalized Cross Correlation (NCC) demonstrated the superiority of our method. Specifically, the model achieved an MAE of 9.71, PSNR of 37.08 dB, SSIM of 0.97, and NCC of 0.99, confirming its efficacy. Significance: The integration of AFNO within the CycleGAN UNet framework addresses Cone Beam CT image quality limitations. The model generates synthetic CTs that allow adaptive treatment planning during SBRT, enabling adjustments to the dose based on tumor response, thus reducing radiotoxicity from increased doses. This method's ability to preserve both global and local anatomical features shows potential for improving tumor targeting, adaptive radiotherapy planning, and clinical decision-making.
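
The AFNO component mixes tokens in the Fourier domain; a much-simplified sketch of that pattern (FFT, learned per-mode weights, inverse FFT) is shown below. The block-diagonal mode MLPs, soft shrinkage, and CycleGAN integration of the actual method are omitted, and all dimensions are assumptions.

```python
# Simplified Fourier-domain token mixer in the spirit of AFNO (illustrative only).
import torch
import torch.nn as nn

class FourierMixer2D(nn.Module):
    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # One learnable complex weight per channel and retained frequency mode.
        self.weight = nn.Parameter(
            torch.randn(channels, height, width // 2 + 1, dtype=torch.cfloat) * 0.02
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) feature map at the ViT bottleneck.
        modes = torch.fft.rfft2(x, norm="ortho")           # to frequency domain
        modes = modes * self.weight                        # per-mode mixing
        return torch.fft.irfft2(modes, s=x.shape[-2:], norm="ortho")

# Example: mix an 8-channel, 16x16 bottleneck feature map.
# mixer = FourierMixer2D(8, 16, 16); out = mixer(torch.randn(2, 8, 16, 16))
```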

Topology Optimization in Medical Image Segmentation with Fast χ Euler Characteristic.

Li L, Ma Q, Oyang C, Paetzold JC, Rueckert D, Kainz B

PubMed · Jul 28 2025
Deep learning-based medical image segmentation techniques have shown promising results when evaluated based on conventional metrics such as the Dice score or Intersection-over-Union. However, these fully automatic methods often fail to meet clinically acceptable accuracy, especially when topological constraints must be observed, e.g., continuous boundaries or closed surfaces. In medical image segmentation, the correctness of a segmentation in terms of the required topological genus is sometimes even more important than the pixel-wise accuracy. Existing topology-aware approaches commonly estimate and constrain the topological structure via the concept of persistent homology (PH). However, these methods are difficult to implement for high-dimensional data due to their polynomial computational complexity. To overcome this problem, we propose a novel and fast approach for topology-aware segmentation based on the Euler Characteristic (χ). First, we propose a fast formulation for χ computation in both 2D and 3D. The scalar χ error between the prediction and the ground truth serves as the topological evaluation metric. Then we estimate the spatial topology correctness of any segmentation network via a so-called topological violation map, i.e., a detailed map that highlights regions with χ errors. Finally, the segmentation results from an arbitrary network are refined based on the topological violation maps by a topology-aware correction network. Our experiments are conducted on both 2D and 3D datasets and show that our method can significantly improve topological correctness while preserving pixel-wise segmentation accuracy.
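
For a 2D binary mask, χ can be computed quickly by counting vertices, edges, and faces of the pixel (cubical) complex; a small NumPy sketch of this generic counting approach is below. It illustrates the idea of a fast χ computation but is not necessarily the paper's exact formulation.

```python
# Euler characteristic of a 2D binary mask via the cubical complex:
# chi = vertices - edges + faces. Generic counting sketch, not the paper's code.
import numpy as np

def euler_characteristic_2d(mask: np.ndarray) -> int:
    m = np.pad(mask.astype(bool), 1)            # zero border simplifies edge cases
    h, w = mask.shape
    faces = int(mask.astype(bool).sum())        # each foreground pixel is one face
    # A lattice vertex exists if any of its (up to) four touching pixels is foreground.
    v = m[0:h+1, 0:w+1] | m[0:h+1, 1:w+2] | m[1:h+2, 0:w+1] | m[1:h+2, 1:w+2]
    # A horizontal edge exists if the pixel above or below it is foreground.
    eh = m[0:h+1, 1:w+1] | m[1:h+2, 1:w+1]
    # A vertical edge exists if the pixel left or right of it is foreground.
    ev = m[1:h+1, 0:w+1] | m[1:h+1, 1:w+2]
    return int(v.sum()) - int(eh.sum() + ev.sum()) + faces

# A filled square has chi = 1; punching a hole in it gives chi = 0.
square = np.ones((5, 5), dtype=np.uint8)
ring = square.copy(); ring[2, 2] = 0
assert euler_characteristic_2d(square) == 1 and euler_characteristic_2d(ring) == 0
```

The scalar χ error between a prediction and the ground truth is then simply the difference of the two counts, which makes it cheap enough to evaluate per volume during training or correction.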

Patient Perspectives on Artificial Intelligence in Medical Imaging.

Glenning J, Gualtieri L

PubMed · Jul 28 2025
Artificial intelligence (AI) is reshaping medical imaging with the promise of improved diagnostic accuracy and efficiency. Yet, its ethical and effective adoption depends not only on technical excellence but also on aligning implementation with patient perspectives. This commentary synthesizes emerging research on how patients perceive AI in radiology, expressing cautious optimism, a desire for transparency, and a strong preference for human oversight. Patients consistently view AI as a supportive tool rather than a replacement for clinicians. We argue that centering patient voices is essential to sustaining trust, preserving the human connection in care, and ensuring that AI serves as a truly patient-centered innovation. The path forward requires participatory approaches, ethical safeguards, and transparent communication to ensure that AI enhances, rather than diminishes, the values patients hold most dear.

ToothMaker: Realistic Panoramic Dental Radiograph Generation via Disentangled Control.

Yu W, Guo X, Li W, Liu X, Chen H, Yuan Y

PubMed · Jul 28 2025
Generating high-fidelity dental radiographs is essential for training diagnostic models. Despite the development of numerous methods for other medical data, generative approaches in dental radiology remain unexplored. Due to the intricate tooth structures and specialized terminology, these methods often yield ambiguous tooth regions and incorrect dental concepts when applied to dentistry. In this paper, we make the first attempt to investigate diffusion-based teeth X-ray image generation and propose ToothMaker, a novel framework specifically designed for the dental domain. Firstly, to synthesize X-ray images that possess accurate tooth structures and realistic radiological styles simultaneously, we design a control-disentangled fine-tuning (CDFT) strategy. Specifically, we present two separate controllers to handle style and layout control, respectively, and introduce a gradient-based decoupling method that optimizes each using its corresponding disentangled gradients. Secondly, to enhance the model's understanding of dental terminology, we propose a prior-disentangled guidance module (PDGM), enabling precise synthesis of dental concepts. It utilizes a large language model to decompose dental terminology into a series of meta-knowledge elements and performs interactions and refinements through a hypergraph neural network. These elements are then fed into the network to guide the generation of dental concepts. Extensive experiments demonstrate the high fidelity and diversity of the images synthesized by our approach. By incorporating the generated data, we achieve substantial performance improvements on downstream segmentation and visual question answering tasks, indicating that our method can greatly reduce the reliance on manually annotated data. Code will be publicly available at https://github.com/CUHK-AIM-Group/ToothMaker.

From promise to practice: a scoping review of AI applications in abdominal radiology.

Fotis A, Lalwani N, Gupta P, Yee J

PubMed · Jul 28 2025
AI is rapidly transforming abdominal radiology. This scoping review mapped current applications across segmentation, detection, classification, prediction, and workflow optimization based on 432 studies published between 2019 and 2024. Most studies focused on CT imaging, with fewer involving MRI, ultrasound, or X-ray. Segmentation models (e.g., U-Net) performed well in liver and pancreatic imaging (Dice coefficient 0.65-0.90). Classification models (e.g., ResNet, DenseNet) were commonly used for diagnostic labeling, with reported sensitivities ranging from 52 to 100% and specificities from 40.7 to 99%. A small number of studies employed true object detection models (e.g., YOLOv3, YOLOv7, Mask R-CNN) capable of spatial lesion localization, marking an emerging trend toward localization-based AI. Predictive models demonstrated AUCs between 0.62 and 0.99 but often lacked interpretability and external validation. Workflow optimization studies reported improved efficiency (e.g., reduced report turnaround and scan repetition), though standardized benchmarks were often missing. Major gaps identified include limited real-world validation, underuse of non-CT modalities, and unclear regulatory pathways. Successful clinical integration will require robust validation, practical implementation, and interdisciplinary collaboration.

Fully automated 3D multi-modal deep learning model for preoperative T-stage prediction of colorectal cancer using ¹⁸F-FDG PET/CT.

Zhang M, Li Y, Zheng C, Xie F, Zhao Z, Dai F, Wang J, Wu H, Zhu Z, Liu Q, Li Y

PubMed · Jul 28 2025
This study aimed to develop a fully automated 3D multi-modal deep learning model using preoperative ¹⁸F-FDG PET/CT to predict the T-stage of colorectal cancer (CRC) and evaluate its clinical utility. A retrospective cohort of 474 CRC patients was included, with 400 patients in the internal cohort and 74 in the external cohort. Patients were classified into early T-stage (T1-T2) and advanced T-stage (T3-T4) groups. Automatic segmentation of the volume of interest (VOI) was performed with TotalSegmentator. A 3D ResNet18-based deep learning model integrated with a cross-multi-head attention mechanism was developed. Five models (CT + PET + Clinic (CPC), CT + PET (CP), PET (P), CT (C), Clinic) and the assessments of two radiologists were compared. Performance was evaluated using Area Under the Curve (AUC). Grad-CAM was employed to provide visual interpretability of decision-critical regions. The automated segmentation achieved Dice scores of 0.884 (CT) and 0.888 (PET). The CPC and CP models achieved superior performance, with AUCs of 0.869 and 0.869 in the internal validation cohort, respectively, outperforming single-modality models (P: 0.832; C: 0.809; Clinic: 0.728) and the radiologists (AUC: 0.627, P < 0.05 for all models vs. radiologists, except for the Clinic model). External validation exhibited a similar trend, with AUCs of 0.814, 0.812, 0.763, 0.714, 0.663, and 0.704, respectively. Grad-CAM visualization highlighted tumor-centric regions for early T-stage and peri-tumoral tissue infiltration for advanced T-stage. The fully automated multimodal model, fusing PET and CT with cross-multi-head attention, improved T-stage prediction in CRC, surpassing the single-modality models and radiologists and offering a time-efficient tool to aid clinical decision-making.
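
The cross-multi-head attention fusion of PET and CT features is described only at a high level; one generic way to let each modality's tokens attend to the other's with PyTorch's built-in multi-head attention is sketched below. The dimensions, symmetric two-way fusion, and mean pooling are assumptions, not the paper's architecture.

```python
# Generic cross-attention fusion of CT and PET feature tokens (illustrative only).
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.ct_to_pet = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pet_to_ct = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, ct_tokens: torch.Tensor, pet_tokens: torch.Tensor) -> torch.Tensor:
        # ct_tokens, pet_tokens: (batch, num_tokens, dim) from two 3D ResNet18 encoders.
        ct_attended, _ = self.ct_to_pet(ct_tokens, pet_tokens, pet_tokens)   # CT queries PET
        pet_attended, _ = self.pet_to_ct(pet_tokens, ct_tokens, ct_tokens)   # PET queries CT
        fused = torch.cat([ct_attended.mean(dim=1), pet_attended.mean(dim=1)], dim=-1)
        return fused   # (batch, 2 * dim) feature fed to the T-stage classifier head

# fusion = CrossModalFusion(); feats = fusion(torch.randn(2, 49, 512), torch.randn(2, 49, 512))
```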

Dosimetric evaluation of synthetic kilo-voltage CT images generated from megavoltage CT for head and neck tomotherapy using a conditional GAN network.

Choghazardi Y, Tavakoli MB, Abedi I, Roayaei M, Hemati S, Shanei A

PubMed · Jul 28 2025
The lower image contrast of megavoltage computed tomography (MVCT) compared with kilovoltage computed tomography (kVCT) can hinder accurate dosimetric assessment. This study proposes a deep learning approach, specifically the pix2pix network, to generate high-quality synthetic kVCT (skVCT) images from MVCT data. The model was trained on a dataset of 25 paired patient images and evaluated on a test set of 15 paired images. We performed visual inspections to assess the quality of the generated skVCT images and calculated the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Dosimetric equivalence was evaluated by comparing the gamma pass rates of treatment plans derived from skVCT and kVCT images. Results showed that skVCT images exhibited significantly higher quality than MVCT images, with PSNR and SSIM values of 31.9 ± 1.1 dB and 94.8% ± 1.3%, respectively, compared to 26.8 ± 1.7 dB and 89.5% ± 1.5% for MVCT-to-kVCT comparisons. Furthermore, treatment plans based on skVCT images achieved excellent gamma pass rates of 99.78 ± 0.14% and 99.82 ± 0.20% for 2 mm/2% and 3 mm/3% criteria, respectively, comparable to those obtained from kVCT-based plans (99.70 ± 0.31% and 99.79 ± 1.32%). This study demonstrates the potential of pix2pix models for generating high-quality skVCT images, which could significantly enhance Adaptive Radiation Therapy (ART).
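
The PSNR and SSIM comparisons between skVCT and reference kVCT slices correspond to standard image-quality metrics; a small sketch using scikit-image is below. The data range (an assumed HU window) and any preprocessing are assumptions.

```python
# Illustrative image-quality comparison between a synthetic kVCT slice and the
# reference kVCT slice; the data_range value is an assumption.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def report_quality(skvct: np.ndarray, kvct: np.ndarray, data_range: float = 2000.0):
    psnr = peak_signal_noise_ratio(kvct, skvct, data_range=data_range)
    ssim = structural_similarity(kvct, skvct, data_range=data_range)
    return psnr, ssim

# Example with random stand-in slices:
# psnr, ssim = report_quality(np.random.rand(256, 256) * 2000, np.random.rand(256, 256) * 2000)
```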

Predicting Intracranial Pressure Levels: A Deep Learning Approach Using Computed Tomography Brain Scans.

Theodoropoulos D, Trivizakis E, Marias K, Xirouchaki N, Vakis A, Papadaki E, Karantanas A, Karabetsos DA

PubMed · Jul 28 2025
Elevated intracranial pressure (ICP) is a serious condition that demands prompt diagnosis to avoid significant neurological injury or even death. Although invasive techniques remain the "gold standard" for ICP measurement, they are time-consuming and pose risks of complications. Various noninvasive methods have been suggested, but their experimental status limits their use in emergency situations. On the other hand, although artificial intelligence has rapidly evolved, it has not yet fully harnessed fast-acquisition modalities such as computed tomography (CT) scans to evaluate ICP. This is likely due to the lack of available annotated data sets. In this article, we present research that addresses this gap by training four distinct deep learning models on a custom data set, enhanced with demographic and Glasgow Coma Scale (GCS) values. A key innovation of our study is the incorporation of demographic data and GCS values as additional channels of the scans. The models were trained and validated on a custom data set consisting of paired CT brain scans (n = 578) with corresponding ICP values, supplemented by GCS scores and demographic data. The algorithm addresses a binary classification problem by predicting whether ICP levels exceed a predetermined threshold of 15 mm Hg. The top-performing models achieved an area under the curve of 88.3% and a recall of 81.8%. An algorithm that enhances the transparency of the model's decisions was used to provide insights into where the models focus when generating outcomes, for both the best- and lowest-performing models. This study demonstrates the potential of AI-based models to evaluate ICP levels from brain CT scans with high recall. Although promising, further work is needed to validate these findings and improve clinical applicability.
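
Incorporating demographic data and GCS values as additional channels of the CT input can be done by broadcasting each scalar into a constant-valued plane and concatenating it with the image; a minimal sketch is below. The normalization constants, tensor layout, and the `add_scalar_channels` helper are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: append patient age, sex, and GCS as constant-valued channels
# to a single-channel CT volume. Rescaling constants are illustrative assumptions.
import torch

def add_scalar_channels(ct: torch.Tensor, age: float, sex: int, gcs: int) -> torch.Tensor:
    # ct: (1, D, H, W) single-channel CT volume (or (1, H, W) for a 2D slice).
    scalars = torch.tensor([age / 100.0, float(sex), gcs / 15.0])  # rough rescaling
    planes = [torch.full_like(ct[:1], value) for value in scalars.tolist()]
    return torch.cat([ct, *planes], dim=0)     # (4, D, H, W) multi-channel input

# x = add_scalar_channels(torch.randn(1, 32, 128, 128), age=54, sex=1, gcs=9)  # -> (4, 32, 128, 128)
```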