
Depth-Sequence Transformer (DST) for Segment-Specific ICA Calcification Mapping on Non-Contrast CT

Xiangjian Hou, Ebru Yaman Akcicek, Xin Wang, Kazem Hashemizadeh, Scott Mcnally, Chun Yuan, Xiaodong Ma

arXiv preprint · Jul 10, 2025
While total intracranial carotid artery calcification (ICAC) volume is an established stroke biomarker, growing evidence shows this aggregate metric ignores the critical influence of plaque location, since calcification in different segments carries distinct prognostic and procedural risks. However, a finer-grained, segment-specific quantification has remained technically infeasible. Conventional 3D models are forced to process downsampled volumes or isolated patches, sacrificing the global context required to resolve anatomical ambiguity and render reliable landmark localization. To overcome this, we reformulate the 3D challenge as a Parallel Probabilistic Landmark Localization task along the 1D axial dimension. We propose the Depth-Sequence Transformer (DST), a framework that processes full-resolution CT volumes as sequences of 2D slices, learning to predict N = 6 independent probability distributions that pinpoint key anatomical landmarks. Our DST framework demonstrates exceptional accuracy and robustness. Evaluated on a 100-patient clinical cohort with rigorous 5-fold cross-validation, it achieves a Mean Absolute Error (MAE) of 0.1 slices, with 96% of predictions falling within a ±1 slice tolerance. Furthermore, to validate its architectural power, the DST backbone establishes the best result on the public Clean-CC-CCII classification benchmark under an end-to-end evaluation protocol. Our work delivers the first practical tool for automated segment-specific ICAC analysis. The proposed framework provides a foundation for further studies on the role of location-specific biomarkers in diagnosis, prognosis, and procedural planning. Our code will be made publicly available.
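As a rough illustration of the depth-sequence formulation described above, the sketch below shows a transformer head that maps per-slice embeddings to N = 6 probability distributions over the axial axis. The slice encoder is omitted, and all layer sizes, module names, and the softmax-over-depth readout are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming per-slice embeddings from some 2D encoder (not shown).
import torch
import torch.nn as nn


class DepthSequenceHead(nn.Module):
    """Predicts, for each of N landmarks, a probability distribution over slice depth."""

    def __init__(self, embed_dim: int = 256, num_landmarks: int = 6, num_layers: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(embed_dim, num_landmarks)  # one logit per landmark per slice

    def forward(self, slice_embeddings: torch.Tensor) -> torch.Tensor:
        # slice_embeddings: (batch, num_slices, embed_dim), one embedding per axial slice
        x = self.encoder(slice_embeddings)
        logits = self.head(x)                 # (batch, num_slices, num_landmarks)
        return torch.softmax(logits, dim=1)   # probability distribution over the depth axis


# Usage: probs[:, :, k] is the depth distribution for landmark k; its argmax along
# dim=1 gives the predicted slice index for that landmark.
probs = DepthSequenceHead()(torch.randn(2, 320, 256))
pred_slices = probs.argmax(dim=1)             # (batch, 6) predicted slice indices
```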

Acute Management of Nasal Bone Fractures: A Systematic Review and Practice Management Guideline.

Paliwoda ED, Newman-Plotnick H, Buzzetta AJ, Post NK, LaClair JR, Trandafirescu M, Gildener-Leapman N, Kpodzo DS, Edwards K, Tafen M, Schalet BJ

PubMed paper · Jul 10, 2025
Nasal bone fractures represent the most common facial skeletal injury, challenging both function and aesthetics. This Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)-based review analyzed 23 studies published within the past 5 years, selected from 998 records retrieved from PubMed, Embase, and Web of Science. Data from 1780 participants were extracted, focusing on diagnostic methods, surgical techniques, anesthesia protocols, and long-term outcomes. Ultrasound and artificial intelligence-based algorithms improved diagnostic accuracy, while telephone triage streamlined necessary encounters. Navigation-assisted reduction, ballooning, and septal reduction with polydioxanone plates improved outcomes. Anesthetic approaches ranged from local nerve blocks to general anesthesia with intraoperative administration of lidocaine, alongside techniques to manage pain from postoperative nasal pack removal. Long-term follow-up demonstrated improved quality of life, breathing function, and aesthetic satisfaction with timely and individualized treatment. This review highlights the trend toward personalized, technology-assisted approaches in nasal fracture management and identifies areas for future research.

Patient-specific vs Multi-Patient Vision Transformer for Markerless Tumor Motion Forecasting

Gauthier Rotsart de Hertaing, Dani Manjah, Benoit Macq

arXiv preprint · Jul 10, 2025
Background: Accurate forecasting of lung tumor motion is essential for precise dose delivery in proton therapy. While current markerless methods mostly rely on deep learning, transformer-based architectures remain unexplored in this domain, despite their proven performance in trajectory forecasting. Purpose: This work introduces a markerless forecasting approach for lung tumor motion using Vision Transformers (ViT). Two training strategies are evaluated under clinically realistic constraints: a patient-specific (PS) approach that learns individualized motion patterns, and a multi-patient (MP) model designed for generalization. The comparison explicitly accounts for the limited number of images that can be generated between planning and treatment sessions. Methods: Digitally reconstructed radiographs (DRRs) derived from planning 4DCT scans of 31 patients were used to train the MP model; a 32nd patient was held out for evaluation. PS models were trained using only the target patient's planning data. Both models used 16 DRRs per input and predicted tumor motion over a 1-second horizon. Performance was assessed using Average Displacement Error (ADE) and Final Displacement Error (FDE), on both planning (T1) and treatment (T2) data. Results: On T1 data, PS models outperformed MP models across all training set sizes, especially with larger datasets (up to 25,000 DRRs, p < 0.05). However, MP models demonstrated stronger robustness to inter-fractional anatomical variability and achieved comparable performance on T2 data without retraining. Conclusions: This is the first study to apply ViT architectures to markerless tumor motion forecasting. While PS models achieve higher precision, MP models offer robust out-of-the-box performance, well-suited for time-constrained clinical settings.
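For reference, the two reported metrics, Average Displacement Error (ADE) and Final Displacement Error (FDE), are the mean per-timestep and final-timestep Euclidean errors between predicted and ground-truth trajectories. The sketch below assumes 2D image-plane coordinates and a toy sampling rate, neither of which is specified in the abstract.

```python
# Hedged sketch of ADE/FDE computation; array shapes are assumptions for illustration.
import numpy as np


def ade_fde(pred: np.ndarray, true: np.ndarray) -> tuple[float, float]:
    """pred, true: (num_sequences, horizon, 2) trajectories in image coordinates."""
    errors = np.linalg.norm(pred - true, axis=-1)   # per-timestep Euclidean error
    ade = errors.mean()                             # averaged over all timesteps
    fde = errors[:, -1].mean()                      # error at the final timestep only
    return float(ade), float(fde)


# Example: a 1-second horizon sampled at an assumed 4 Hz -> 4 future positions.
rng = np.random.default_rng(0)
pred = rng.normal(size=(10, 4, 2))
true = pred + rng.normal(scale=0.1, size=pred.shape)
print(ade_fde(pred, true))
```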

Deformable detection transformers for domain adaptable ultrasound localization microscopy with robustness to point spread function variations.

Gharamaleki SK, Helfield B, Rivaz H

PubMed paper · Jul 10, 2025
Super-resolution imaging has emerged as a rapidly advancing field in diagnostic ultrasound. Ultrasound Localization Microscopy (ULM) achieves sub-wavelength precision in microvasculature imaging by tracking gas microbubbles (MBs) flowing through blood vessels. However, MB localization faces challenges due to dynamic point spread functions (PSFs) caused by harmonic and sub-harmonic emissions, as well as depth-dependent PSF variations in ultrasound imaging. Additionally, deep learning models often struggle to generalize from simulated to in vivo data due to significant disparities between the two domains. To address these issues, we propose a novel approach using the DEformable DEtection TRansformer (DE-DETR). This object detection network tackles object deformations by utilizing multi-scale feature maps and incorporating a deformable attention module. We further refine the super-resolution map by employing a KDTree algorithm for efficient MB tracking across consecutive frames. We evaluated our method using both simulated and in vivo data, demonstrating improved precision and recall compared to current state-of-the-art methodologies. These results highlight the potential of our approach to enhance ULM performance in clinical applications.
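The KD-tree tracking step can be pictured as nearest-neighbour linking of microbubble localizations between consecutive frames. The matching radius and greedy one-to-one assignment in the sketch below are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal KD-tree frame-to-frame linking sketch, assuming 2D localizations in pixels.
import numpy as np
from scipy.spatial import cKDTree


def link_frames(prev_pts: np.ndarray, next_pts: np.ndarray, max_dist: float = 2.0):
    """Return (i, j) index pairs linking prev_pts[i] to next_pts[j] within max_dist."""
    tree = cKDTree(next_pts)
    dists, idx = tree.query(prev_pts, distance_upper_bound=max_dist)
    links, used = [], set()
    for i, (d, j) in enumerate(zip(dists, idx)):
        if np.isfinite(d) and j not in used:   # unmatched queries return d = inf
            links.append((i, j))
            used.add(j)
    return links


prev_pts = np.array([[10.0, 12.0], [40.0, 5.0]])
next_pts = np.array([[10.5, 12.2], [80.0, 3.0]])
print(link_frames(prev_pts, next_pts))   # [(0, 0)]
```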

Artificial Intelligence for Low-Dose CT Lung Cancer Screening: Comparison of Utilization Scenarios.

Lee M, Hwang EJ, Lee JH, Nam JG, Lim WH, Park H, Park CM, Choi H, Park J, Goo JM

PubMed paper · Jul 10, 2025
BACKGROUND. Artificial intelligence (AI) tools for evaluating low-dose CT (LDCT) lung cancer screening examinations are used predominantly for assisting radiologists' interpretations. Alternate utilization scenarios (e.g., use of AI as a prescreener or backup) warrant consideration. OBJECTIVE. The purpose of this study was to evaluate the impact of different AI utilization scenarios on diagnostic outcomes and interpretation times for LDCT lung cancer screening. METHODS. This retrospective study included 366 individuals (358 men, 8 women; mean age, 64 years) who underwent LDCT from May 2017 to December 2017 as part of an earlier prospective lung cancer screening trial. Examinations were interpreted by one of five readers, who reviewed their assigned cases in two sessions (with and without a commercial AI computer-aided detection tool). These interpretations were used to reconstruct simulated AI utilization scenarios: as an assistant (i.e., radiologists interpret all examinations with AI assistance), as a prescreener (i.e., radiologists only interpret examinations with a positive AI result), or as backup (i.e., radiologists reinterpret examinations when AI suggests a missed finding). A group of thoracic radiologists determined the reference standard. Diagnostic outcomes and mean interpretation times were assessed. Decision-curve analysis was performed. RESULTS. Compared with interpretation without AI (recall rate, 22.1%; per-nodule sensitivity, 64.2%; per-examination specificity, 88.8%; mean interpretation time, 164 seconds), AI as an assistant showed higher recall rate (30.3%; p < .001), lower per-examination specificity (81.1%), and no significant change in per-nodule sensitivity (64.8%; p = .86) or mean interpretation time (161 seconds; p = .48); AI as a prescreener showed lower recall rate (20.8%; p = .02) and mean interpretation time (143 seconds; p = .001), higher per-examination specificity (90.3%; p = .04), and no significant difference in per-nodule sensitivity (62.9%; p = .16); and AI as a backup showed increased recall rate (33.6%; p < .001), per-examination sensitivity (66.4%; p < .001), and mean interpretation time (225 seconds; p = .001), with lower per-examination specificity (79.9%; p < .001). Among scenarios, only AI as a prescreener demonstrated higher net benefit than interpretation without AI; AI as an assistant had the least net benefit. CONCLUSION. Different AI implementation approaches yield varying outcomes. The findings support use of AI as a prescreener as the preferred scenario. CLINICAL IMPACT. An approach whereby radiologists only interpret LDCT examinations with a positive AI result can reduce radiologists' workload while preserving sensitivity.
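At the per-examination recall level, the three utilization scenarios can be approximated by simple boolean combinations of the AI result and the radiologist reads. The sketch below is a deliberately simplified reconstruction under our own assumptions and does not reproduce the study's finding-level reinterpretation rules.

```python
# Hedged sketch: per-exam recall decisions under the three scenarios (simplified).
import numpy as np


def recall_decisions(rad_alone: np.ndarray, rad_with_ai: np.ndarray, ai: np.ndarray) -> dict:
    """Boolean per-exam arrays: radiologist alone, radiologist reading with AI, AI alone."""
    return {
        "assistant": rad_with_ai,                      # radiologist reads every exam with AI
        "prescreener": ai & rad_with_ai,               # radiologist reads only AI-positive exams
        "backup": rad_alone | (ai & rad_with_ai),      # AI-flagged exams get reinterpreted
    }


# Toy data: 366 exams with made-up positivity rates, for illustration only.
rng = np.random.default_rng(1)
ai, rad_alone, rad_with_ai = (rng.random((3, 366)) < 0.25)
rates = {k: v.mean() for k, v in recall_decisions(rad_alone, rad_with_ai, ai).items()}
print(rates)  # recall rate per scenario
```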

Hierarchical deep learning system for orbital fracture detection and trap-door classification on CT images.

Oku H, Nakamura Y, Kanematsu Y, Akagi A, Kinoshita S, Sotozono C, Koizumi N, Watanabe A, Okumura N

PubMed paper · Jul 10, 2025
To develop and evaluate a hierarchical deep learning system that detects orbital fractures on computed tomography (CT) images and classifies them as depressed or trap-door types. A retrospective diagnostic accuracy study analyzing CT images from patients with confirmed orbital fractures. We collected CT images from 686 patients with orbital fractures treated at a single institution (2010-2025), resulting in 46,013 orbital CT slices. After preprocessing, 7,809 slices were selected as regions of interest and partitioned into training (6,508 slices) and test (1,301 slices) datasets. Our hierarchical approach consisted of a first-stage classifier (YOLOv8) for fracture detection and a second-stage classifier (Vision Transformer) for distinguishing depressed from trap-door fractures. Performance was evaluated at both slice and patient levels, focusing on accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC-ROC). For fracture detection, YOLOv8 achieved a slice-level sensitivity of 80.4% and specificity of 79.2%, with patient-level performance improving to 94.7% sensitivity and 90.0% specificity. For fracture classification, the Vision Transformer demonstrated a slice-level sensitivity of 91.5% and specificity of 83.5% for distinguishing trap-door from depressed fractures, with patient-level metrics of 100% sensitivity and 88.9% specificity. The complete system correctly identified 18/20 no-fracture cases, 35/40 depressed fracture cases, and 15/17 trap-door fracture cases. Our hierarchical deep learning system effectively detects orbital fractures and distinguishes between depressed and trap-door types with high accuracy. This approach could aid in the timely identification of trap-door fractures requiring urgent surgical intervention, particularly in settings lacking specialized expertise.
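Patient-level results in hierarchical systems like this are typically obtained by aggregating slice-level predictions. The sketch below uses a simple count-threshold rule, which is an assumption for illustration rather than the aggregation rule reported in the paper.

```python
# Sketch: aggregate slice-level fracture predictions into a per-patient decision,
# assuming a patient is positive if at least `min_positive_slices` slices are positive.
from collections import defaultdict


def patient_level(slice_preds: list[tuple[str, int]], min_positive_slices: int = 1) -> dict[str, int]:
    """slice_preds: (patient_id, slice_label) pairs with label 1 = fracture detected."""
    counts: dict[str, int] = defaultdict(int)
    for patient_id, label in slice_preds:
        counts[patient_id] += int(label)
    return {pid: int(n >= min_positive_slices) for pid, n in counts.items()}


preds = [("p01", 0), ("p01", 1), ("p01", 1), ("p02", 0), ("p02", 0)]
print(patient_level(preds))  # {'p01': 1, 'p02': 0}
```

Raising the slice-count threshold trades recall for specificity at the patient level, which is one way such systems tune the slice-to-patient gap seen in the reported metrics.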

Attend-and-Refine: Interactive keypoint estimation and quantitative cervical vertebrae analysis for bone age assessment

Jinhee Kim, Taesung Kim, Taewoo Kim, Dong-Wook Kim, Byungduk Ahn, Yoon-Ji Kim, In-Seok Song, Jaegul Choo

arXiv preprint · Jul 10, 2025
In pediatric orthodontics, accurate estimation of growth potential is essential for developing effective treatment strategies. Our research aims to predict this potential by identifying the growth peak and analyzing cervical vertebra morphology solely through lateral cephalometric radiographs. We accomplish this by comprehensively analyzing cervical vertebral maturation (CVM) features from these radiographs. This methodology provides clinicians with a reliable and efficient tool to determine the optimal timings for orthodontic interventions, ultimately enhancing patient outcomes. A crucial aspect of this approach is the meticulous annotation of keypoints on the cervical vertebrae, a task often challenged by its labor-intensive nature. To mitigate this, we introduce the Attend-and-Refine Network (ARNet), a user-interactive, deep learning-based model designed to streamline the annotation process. ARNet features an interaction-guided recalibration network, which adaptively recalibrates image features in response to user feedback, coupled with a morphology-aware loss function that preserves the structural consistency of keypoints. This novel approach substantially reduces manual effort in keypoint identification, thereby enhancing the efficiency and accuracy of the process. Extensively validated across various datasets, ARNet demonstrates remarkable performance and exhibits wide-ranging applicability in medical imaging. In conclusion, our research offers an effective AI-assisted diagnostic tool for assessing growth potential in pediatric orthodontics, marking a significant advancement in the field.
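A loss that "preserves the structural consistency of keypoints" could, for example, combine coordinate regression with a pairwise inter-keypoint distance term. The sketch below is one plausible form under our own assumptions, not ARNet's published loss; the keypoint count and weighting are placeholders.

```python
# Speculative sketch of a morphology-aware keypoint loss (not the authors' formulation).
import torch


def morphology_aware_loss(pred: torch.Tensor, gt: torch.Tensor, weight: float = 0.1) -> torch.Tensor:
    """pred, gt: (batch, num_keypoints, 2) keypoint coordinates."""
    coord_loss = torch.nn.functional.l1_loss(pred, gt)
    # Pairwise inter-keypoint distance matrices encode the overall vertebral shape.
    pred_d = torch.cdist(pred, pred)
    gt_d = torch.cdist(gt, gt)
    shape_loss = torch.nn.functional.l1_loss(pred_d, gt_d)
    return coord_loss + weight * shape_loss


pred = torch.rand(4, 13, 2, requires_grad=True)   # 13 keypoints is an arbitrary example
gt = torch.rand(4, 13, 2)
morphology_aware_loss(pred, gt).backward()
```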

Dataset and Benchmark for Enhancing Critical Retained Foreign Object Detection

Yuli Wang, Victoria R. Shi, Liwei Zhou, Richard Chin, Yuwei Dai, Yuanyun Hu, Cheng-Yi Li, Haoyue Guan, Jiashu Cheng, Yu Sun, Cheng Ting Lin, Ihab Kamel, Premal Trivedi, Pamela Johnson, John Eng, Harrison Bai

arXiv preprint · Jul 9, 2025
Critical retained foreign objects (RFOs), including surgical instruments like sponges and needles, pose serious patient safety risks and carry significant financial and legal implications for healthcare institutions. Detecting critical RFOs using artificial intelligence remains challenging due to their rarity and the limited availability of chest X-ray datasets that specifically feature critical RFO cases. Existing datasets only contain non-critical RFOs, such as necklaces or zippers, further limiting their utility for developing clinically impactful detection algorithms. To address these limitations, we introduce "Hopkins RFOs Bench", the first and largest dataset of its kind, containing 144 chest X-ray images of critical RFO cases collected over 18 years from the Johns Hopkins Health System. Using this dataset, we benchmark several state-of-the-art object detection models, highlighting the need for enhanced detection methodologies for critical RFO cases. Recognizing data scarcity challenges, we further explore synthetic image methods to bridge this gap. We evaluate two advanced synthetic image methods, DeepDRR-RFO, a physics-based method, and RoentGen-RFO, a diffusion-based method, for creating realistic radiographs featuring critical RFOs. Our comprehensive analysis identifies the strengths and limitations of each synthetic method, providing insights into effectively utilizing synthetic data to enhance model training. The Hopkins RFOs Bench and our findings significantly advance the development of reliable, generalizable AI-driven solutions for detecting critical RFOs in clinical chest X-rays.

A novel segmentation-based deep learning model for enhanced scaphoid fracture detection.

Bützow A, Anttila TT, Haapamäki V, Ryhänen J

PubMed paper · Jul 9, 2025
To develop a deep learning (DL) model to detect apparent and occult scaphoid fractures from plain wrist radiographs and to compare the model's diagnostic performance with that of a group of experts. A dataset comprising 408 patients, 410 wrists, and 1011 radiographs was collected. Of these radiographs, 718 contained a scaphoid fracture, verified by magnetic resonance imaging or computed tomography scans, and 58 of these fractures were occult. The images were divided into training, test, and occult fracture test sets and were annotated by marking the scaphoid bone and the possible fracture area. The performance of the developed DL model was compared with the ground truth and the assessments of three clinical experts. The DL model achieved a sensitivity of 0.86 (95% CI: 0.75-0.93) and a specificity of 0.83 (0.64-0.94). The model's accuracy was 0.85 (0.76-0.92), and the area under the receiver operating characteristic curve was 0.92 (0.86-0.97). The clinical experts' sensitivity ranged from 0.77 to 0.89, and specificity from 0.83 to 0.97. The DL model detected 24 of 58 (41%) occult fractures, compared to 10.3%, 13.7%, and 6.8% by the clinical experts. Detecting scaphoid fractures using a segmentation-based DL model is feasible and comparable to previously developed DL models. The model performed similarly to a group of experts in identifying apparent scaphoid fractures and demonstrated higher diagnostic accuracy in detecting occult fractures. The improvement in occult fracture detection could enhance patient care.
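Sensitivity and specificity with 95% confidence intervals, as reported above, are commonly obtained with a case-level percentile bootstrap. The sketch below assumes that resampling scheme, which is a standard choice rather than one stated in the abstract, and the toy labels are placeholders.

```python
# Sketch: sensitivity/specificity point estimates with percentile bootstrap 95% CIs.
import numpy as np


def metric_with_ci(y_true: np.ndarray, y_pred: np.ndarray, n_boot: int = 2000, seed: int = 0):
    rng = np.random.default_rng(seed)

    def sens_spec(t, p):
        sens = (p[t == 1] == 1).mean()   # true positives / all positives
        spec = (p[t == 0] == 0).mean()   # true negatives / all negatives
        return sens, spec

    point = sens_spec(y_true, y_pred)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))   # resample cases with replacement
        boots.append(sens_spec(y_true[idx], y_pred[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5], axis=0)
    return point, (lo, hi)


# Placeholder binary labels and predictions, for illustration only.
y_true = np.array([1] * 80 + [0] * 40)
y_pred = np.array([1] * 68 + [0] * 12 + [0] * 33 + [1] * 7)
print(metric_with_ci(y_true, y_pred))
```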

Automated Detection of Focal Bone Marrow Lesions From MRI: A Multi-center Feasibility Study in Patients with Monoclonal Plasma Cell Disorders.

Wennmann M, Kächele J, von Salomon A, Nonnenmacher T, Bujotzek M, Xiao S, Martinez Mora A, Hielscher T, Hajiyianni M, Menis E, Grözinger M, Bauer F, Riebl V, Rotkopf LT, Zhang KS, Afat S, Besemer B, Hoffmann M, Ringelstein A, Graeven U, Fedders D, Hänel M, Antoch G, Fenk R, Mahnken AH, Mann C, Mokry T, Raab MS, Weinhold N, Mai EK, Goldschmidt H, Weber TF, Delorme S, Neher P, Schlemmer HP, Maier-Hein K

PubMed paper · Jul 9, 2025
To train and test an AI-based algorithm for automated detection of focal bone marrow lesions (FLs) from MRI. This retrospective feasibility study included 444 patients with monoclonal plasma cell disorders; only FLs in the left pelvis were included. Using the nnDetection framework, the algorithm was trained on 334 patients with 494 FLs from center 1 and was tested on an internal test set (36 patients, 89 FLs, center 1) and a multicentric external test set (74 patients, 262 FLs, centers 2-11). Mean average precision (mAP), F1-score, sensitivity, positive predictive value (PPV), and the Spearman correlation coefficient between the automatically determined and actual number of FLs were calculated. On the internal/external test set, the algorithm achieved a mAP of 0.44/0.34, F1-score of 0.54/0.44, sensitivity of 0.49/0.34, and a PPV of 0.61/0.61, respectively. In two subsets of the external multicentric test set with high imaging quality, the performance nearly matched that of the internal test set, with a mAP of 0.45/0.41, F1-score of 0.50/0.53, sensitivity of 0.44/0.43, and a PPV of 0.60/0.71, respectively. There was a significant correlation between the automatically determined and actual number of FLs on both the internal (r=0.51, p=0.001) and external multicentric test sets (r=0.59, p<0.001). This study demonstrates that the automated detection of FLs from MRI, and thereby the automated assessment of the number of FLs, is feasible.
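The reported correlation between automatically determined and actual per-patient FL counts is a Spearman rank correlation, which can be computed with scipy as sketched below; the counts shown are placeholder values for illustration only.

```python
# Sketch: Spearman correlation between detected and reference lesion counts per patient.
from scipy.stats import spearmanr

detected_fls = [3, 0, 7, 1, 4, 2, 0, 5]   # per-patient counts from the detector (placeholder)
actual_fls = [4, 0, 6, 1, 5, 2, 1, 5]     # per-patient reference counts (placeholder)

rho, p_value = spearmanr(detected_fls, actual_fls)
print(f"Spearman r = {rho:.2f}, p = {p_value:.3f}")
```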