
Innovations in gender affirmation: AI-enhanced surgical guides for mandibular facial feminization surgery.

Beyer M, Abazi S, Tourbier C, Burde A, Vinayahalingam S, Ileșan RR, Thieringer FM

PubMed | Jul 25, 2025
This study presents a fully automated digital workflow that uses artificial intelligence (AI) to create patient-specific cutting guides for mandible-angle osteotomies in facial feminization surgery (FFS). The goal is to achieve predictable, accurate, and safe results with minimal user input, addressing the time and effort required for conventional guide creation. Three-dimensional CT images of 30 male patients were used to develop and validate a workflow that automates two key processes: (1) segmentation of the mandible using a convolutional neural network (3D U-Net architecture) and (2) virtual design of osteotomy-specific cutting guides. Segmentation accuracy was assessed against expert manual segmentations using the Dice similarity coefficient (DSC) and mean surface distance (MSD). The precision of the cutting guides was evaluated based on osteotomy-line accuracy and fit. Workflow efficiency was measured by comparing the time required for automated versus manual planning by expert and novice users. The AI-based workflow achieved a median DSC of 0.966 and a median MSD of 0.212 mm, demonstrating high accuracy. The median planning time was reduced to 1 min 38 s with the automated system, compared to 19 min 37 s for an expert and 26 min 39 s for a novice, roughly 12- and 16-fold time reductions, respectively. The AI-based workflow is accurate, efficient, and cost-effective, significantly reducing planning time while maintaining clinical precision. It improves surgical outcomes with precise and reliable cutting guides, enhancing efficiency and accessibility for clinicians, including those with limited experience in designing cutting guides.
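
For readers unfamiliar with the reported metrics, the sketch below shows how the Dice similarity coefficient and the symmetric mean surface distance are commonly computed from binary segmentation masks. The study's exact implementation is not described in the abstract; function names and defaults here are illustrative.

```python
import numpy as np
from scipy import ndimage

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())

def mean_surface_distance(pred, ref, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean surface distance between two binary masks,
    in the physical units given by `spacing` (e.g., mm per voxel)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    # Surface voxels = mask minus its erosion.
    surf_pred = pred ^ ndimage.binary_erosion(pred)
    surf_ref = ref ^ ndimage.binary_erosion(ref)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dt_ref = ndimage.distance_transform_edt(~surf_ref, sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~surf_pred, sampling=spacing)
    return (dt_ref[surf_pred].mean() + dt_pred[surf_ref].mean()) / 2.0
```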

Methinks AI software for identifying large vessel occlusion in non-contrast head CT: A pilot retrospective study in American population.

Sanders JV, Keigher K, Oliver M, Joshi K, Lopes D

PubMed | Jul 25, 2025
Background: Non-contrast computed tomography (NCCT) is the first-line imaging study for stroke assessment, but its sensitivity for detecting large vessel occlusion (LVO) is limited. Artificial intelligence (AI) algorithms may enable faster LVO diagnosis using NCCT alone. This study evaluates the performance and potential diagnostic time savings of the Methinks LVO AI algorithm in a U.S. multi-facility stroke network. Methods: This retrospective pilot study reviewed NCCT and computed tomography angiography (CTA) images acquired between 2015 and 2023. The Methinks AI algorithm, designed to detect LVOs in the internal carotid artery and middle cerebral artery, was evaluated for sensitivity, specificity, and predictive values, with a neuroradiologist's case review serving as the gold standard. To evaluate potential time savings in the workflow, time gaps between NCCT and CTA in true-positive cases were stratified into four groups: Group 1 (<10 min), Group 2 (10-30 min), Group 3 (30-60 min), and Group 4 (>60 min). Results: From a total of 1155 stroke codes, 608 NCCT exams were analyzed. Methinks LVO demonstrated 75% sensitivity and 83% specificity, correctly identifying 146 of 194 confirmed LVO cases. The PPV of the algorithm was 72%. The NPV was 83% when 'other occlusion', 'stenosis', and 'posteriors' were considered negatives, and 73% when the same conditions were considered positives. Among the true-positive cases, there were 112 patients in Group 1, 32 in Group 2, 15 in Group 3, and 3 in Group 4. Conclusion: The Methinks AI algorithm shows promise for improving LVO detection from NCCT, especially in resource-limited settings. However, its sensitivity remains lower than that of CTA-based systems, suggesting the need for further refinement.
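
As a quick reference, the metrics reported above follow from standard confusion-matrix arithmetic. The snippet below is purely illustrative; apart from the 146/194 sensitivity figure stated in the abstract, it does not reproduce the study's counts.

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard screening-test metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# The abstract reports 146 of 194 confirmed LVOs detected:
print(f"sensitivity = {146 / 194:.0%}")  # -> 75%
```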

Carotid and femoral bifurcation plaques detected by ultrasound as predictors of cardiovascular events.

Blinc A, Nicolaides AN, Poredoš P, Paraskevas KI, Heiss C, Müller O, Rammos C, Stanek A, Jug B

PubMed | Jul 25, 2025
Risk factor-based algorithms give a good estimate of cardiovascular (CV) risk at the population level but are often inaccurate at the individual level. Detecting preclinical atherosclerotic plaques in the carotid and common femoral arterial bifurcations by ultrasound is a simple, non-invasive way of detecting atherosclerosis in an individual and thus estimating their risk of future CV events more accurately. The presence of plaques in these bifurcations is independently associated with an increased risk of CV death and myocardial infarction, even after adjusting for traditional risk factors, while ultrasonographic characteristics of vulnerable plaque are mostly associated with increased risk of ipsilateral ischaemic stroke. The predictive value of carotid and femoral plaques for CV events increases in proportion to plaque burden, and especially with plaque progression over time. Assessing the burden of carotid and/or common femoral bifurcation plaques enables reclassification of a significant number of individuals deemed low risk by risk factor-based algorithms into intermediate or high CV risk, and of intermediate-risk individuals into the low- or high-risk categories. Ongoing multimodality imaging studies, supplemented by clinical and genetic data and aided by machine learning/artificial intelligence analysis, are expected to advance our understanding of atherosclerosis progression from the asymptomatic to the symptomatic phase and to personalize prevention.

Advances and challenges in AI-assisted MRI for lumbar disc degeneration detection and classification.

Zhao P, Zhu S

PubMed | Jul 25, 2025
Intervertebral disc degeneration (IDD) is a major contributor to chronic low back pain. Magnetic resonance imaging (MRI) serves as the gold standard for IDD assessment, yet manual grading is often subjective and inconsistent. With advances in artificial intelligence (AI), particularly deep learning, automated detection and classification of IDD from MRI have become increasingly feasible. This narrative review provides a comprehensive overview of AI applications, especially machine learning and deep learning techniques, for MRI-based detection and grading of lumbar disc degeneration, highlighting their clinical value, current limitations, and future directions. Relevant studies were reviewed and summarized thematically. The review covers classical methods (e.g., support vector machines), deep learning models (e.g., CNNs, SpineNet, ResNet, U-Net), and hybrid approaches incorporating transformers and multitask learning. Technical details, model architectures, performance metrics, and representative datasets are synthesized and discussed. AI systems have demonstrated promising performance in automatic IDD grading, in some cases matching or surpassing expert radiologists. CNN-based models showed high accuracy and reproducibility, while hybrid models further enhanced segmentation and classification. However, challenges remain in generalizability, data imbalance, interpretability, and regulatory integration. Tools such as Grad-CAM and SHAP improve model transparency, while methods such as few-shot learning and data augmentation can alleviate data limitations. AI-assisted analysis of MRI for lumbar disc degeneration offers significant potential to enhance diagnostic efficiency and consistency. While current models are encouraging, real-world clinical implementation requires further advances in interpretability, data diversity, ethical standards, and large-scale validation.
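
The review cites Grad-CAM as a transparency tool for grading models. A minimal PyTorch sketch of the technique, assuming a 2D CNN classifier and a chosen convolutional target layer, might look like this (all names are illustrative, not from any specific reviewed study):

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx):
    """Minimal Grad-CAM: weight the target layer's activations by the
    spatially averaged gradients of the class score, then apply ReLU."""
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: acts.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))
    score = model(image)[0, class_idx]   # scalar logit for the target class
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # GAP over H, W
    cam = F.relu((weights * acts["a"]).sum(dim=1))       # (1, h, w) heatmap
    return cam / cam.max()
```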

Exploring AI-Based System Design for Pixel-Level Protected Health Information Detection in Medical Images.

Truong T, Baltruschat IM, Klemens M, Werner G, Lenga M

PubMed | Jul 25, 2025
De-identification of medical images is a critical step to ensure privacy when sharing data in research and clinical settings. The first step in this process is detecting Protected Health Information (PHI), which can appear in image metadata or be imprinted within the image pixels. Despite the importance of such systems, existing AI-based solutions have seen limited evaluation, creating barriers to the development of reliable and robust tools. In this study, we present an AI-based pipeline for PHI detection comprising three key modules: text detection, text extraction, and text analysis. We benchmark three models (YOLOv11, EasyOCR, and GPT-4o) across different setups corresponding to these modules, evaluating their performance on two datasets encompassing multiple imaging modalities and PHI categories. Our findings indicate that the optimal setup uses dedicated vision and language models for each module, achieving a good balance among performance, latency, and the cost associated with large language model (LLM) usage. Additionally, we show that LLMs can not only identify PHI content but also enhance OCR and enable an end-to-end PHI detection pipeline, with promising results in our analysis.
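
To make the three-module structure concrete, here is a rough sketch of such a pipeline using EasyOCR (which bundles text detection and recognition) followed by an LLM-based text-analysis step. The `classify_phi` helper and the prompt are hypothetical stand-ins for whatever LLM call the chosen setup uses; only the EasyOCR calls are a real API.

```python
import easyocr

# Modules 1+2: detect and transcribe burned-in text.
reader = easyocr.Reader(["en"])
detections = reader.readtext("ultrasound_frame.png")  # [(bbox, text, conf), ...]

# Module 3: text analysis -- classify each snippet as PHI / not PHI.
PHI_PROMPT = (
    "Does the following text from a medical image contain protected health "
    "information (name, date of birth, MRN, etc.)? Answer YES or NO.\n\n{}"
)

for bbox, text, conf in detections:
    if conf < 0.3:
        continue  # drop low-confidence OCR fragments
    # `classify_phi` is a hypothetical wrapper around an LLM call.
    answer = classify_phi(PHI_PROMPT.format(text))
    if answer.strip().upper().startswith("YES"):
        print("PHI found:", text, "at", bbox)
```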

Automatic Prediction of TMJ Disc Displacement in CBCT Images Using Machine Learning.

Choi H, Jeon KJ, Lee C, Choi YJ, Jo GD, Han SS

PubMed | Jul 25, 2025
Magnetic resonance imaging (MRI) is the gold standard for diagnosing disc displacement in temporomandibular joint (TMJ) disorders, but its high cost and practical challenges limit its accessibility. This study aimed to develop a machine learning (ML) model that can predict TMJ disc displacement using only cone-beam computed tomography (CBCT)-based radiomics features, without MRI. CBCT images of 247 mandibular condyles from 134 patients who also underwent MRI were analyzed. Two ML models, random forest (RF) and extreme gradient boosting (XGBoost), were trained for three experiments based on different patient groupings. Experiment 1 classified the data into three groups: normal, disc displacement with reduction (DDWR), and disc displacement without reduction (DDWOR). Experiment 2 classified normal versus disc displacement (DDWR and DDWOR), and Experiment 3 classified normal and DDWR versus DDWOR. The RF model outperformed XGBoost across all three experiments; Experiment 3, which differentiated DDWOR from the other conditions, achieved the highest performance, with area under the receiver operating characteristic curve (AUC) values of 0.86 (RF) and 0.85 (XGBoost). Experiment 2 followed with AUC values of 0.76 (RF) and 0.75 (XGBoost), while Experiment 1, which classified all three groups, had the lowest values at 0.63 (RF) and 0.59 (XGBoost). The RF model, using radiomics features from CBCT images, demonstrated potential as an assistive tool for predicting DDWOR, the condition that requires the most careful management.
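
A minimal scikit-learn sketch of the Experiment 3 setup (binary DDWOR vs. rest with a random forest evaluated by AUC) is shown below. The feature matrix, labels, hyperparameters, and cross-validation scheme are placeholders; the abstract does not specify them.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

# Placeholder data standing in for CBCT radiomics features: in the study,
# X would hold one feature vector per condyle and y the MRI-derived label
# (here, 1 = DDWOR, 0 = normal/DDWR, mirroring Experiment 3).
rng = np.random.default_rng(0)
X = rng.normal(size=(247, 100))
y = rng.integers(0, 2, size=247)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
probs = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print("AUC:", roc_auc_score(y, probs))
```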

CT-free kidney single-photon emission computed tomography for glomerular filtration rate.

Kwon K, Oh D, Kim JH, Yoo J, Lee WW

PubMed | Jul 25, 2025
This study explores an artificial intelligence-based approach to CT-free quantitative SPECT for kidney imaging using Tc-99m DTPA, aiming to estimate the glomerular filtration rate (GFR) without relying on CT. A total of 1000 SPECT/CT scans were used to train and test a deep-learning model that segments the kidneys automatically based on synthetic attenuation maps (µ-maps) derived from SPECT alone. The model employed a residual U-Net with edge attention and was optimized using windowing-maximum normalization and a generalized Dice similarity loss function. Performance evaluation showed strong agreement with manual CT-based segmentation, achieving a Dice score of 0.818 ± 0.056 and minimal volume differences of 17.9 ± 43.6 mL (mean ± standard deviation). An additional set of 50 scans confirmed that GFR calculated with the AI-based CT-free SPECT (109.3 ± 17.3 mL/min) was nearly identical to that of the conventional SPECT/CT method (109.2 ± 18.4 mL/min, p = 0.9396). The CT-free method reduced radiation exposure by up to 78.8% and shortened segmentation time from 40 min to under 1 min. The findings suggest that AI can effectively replace CT in kidney SPECT imaging, maintaining quantitative accuracy while improving safety and efficiency.
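
The abstract names a generalized Dice similarity loss. A common formulation (Sudre et al., 2017) weights each class's Dice term by the inverse squared class volume so small structures are not swamped by background; a PyTorch sketch of that standard form is below, though the paper's exact variant may differ.

```python
import torch

def generalized_dice_loss(pred, target, eps=1e-6):
    """Generalized Dice loss for 3D segmentation.
    pred:   (B, C, D, H, W) softmax probabilities
    target: (B, C, D, H, W) one-hot ground truth
    """
    dims = (0, 2, 3, 4)                       # sum over batch and space
    w = 1.0 / (target.sum(dims) ** 2 + eps)   # per-class weights, shape (C,)
    intersect = (pred * target).sum(dims)
    union = (pred + target).sum(dims)
    return 1.0 - 2.0 * (w * intersect).sum() / ((w * union).sum() + eps)
```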

XVertNet: Unsupervised Contrast Enhancement of Vertebral Structures with Dynamic Self-Tuning Guidance and Multi-Stage Analysis.

Eidlin E, Hoogi A, Rozen H, Badarne M, Netanyahu NS

PubMed | Jul 25, 2025
Chest X-ray is one of the main diagnostic tools in emergency medicine, yet its limited ability to capture fine anatomical detail can result in missed or delayed diagnoses. To address this, we introduce XVertNet, a novel deep-learning framework designed to significantly enhance the visualization of vertebral structures in X-ray images. The framework introduces two key innovations: (1) an unsupervised learning architecture that eliminates reliance on manually labeled training data, a persistent bottleneck in medical imaging, and (2) a dynamic, self-tuned internal guidance mechanism featuring an adaptive feedback loop for real-time image optimization. Extensive validation across four major public datasets showed that XVertNet outperforms state-of-the-art enhancement methods, as demonstrated by improvements in evaluation measures such as entropy, the Tenengrad criterion, LPC-SI, TMQI, and PIQE. Furthermore, clinical validation by two board-certified clinicians confirmed that the enhanced images enabled more sensitive examination of vertebral structural changes. The unsupervised nature of XVertNet allows immediate clinical deployment without additional training overhead, offering a scalable and time-efficient way to improve diagnostic accuracy in high-pressure emergency radiology environments.
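
Two of the no-reference quality measures cited above are simple to compute directly from an image. The sketch below shows common formulations of histogram entropy and the Tenengrad criterion; the paper's exact variants may differ.

```python
import numpy as np
from scipy import ndimage

def shannon_entropy(img: np.ndarray, bins: int = 256) -> float:
    """Histogram entropy in bits; higher values suggest richer contrast."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def tenengrad(img: np.ndarray) -> float:
    """Tenengrad sharpness: mean squared Sobel gradient magnitude."""
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    return float((gx ** 2 + gy ** 2).mean())
```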

3D-WDA-PMorph: Efficient 3D MRI/TRUS Prostate Registration using Transformer-CNN Network and Wavelet-3D-Depthwise-Attention.

Mahmoudi H, Ramadan H, Riffi J, Tairi H

PubMed | Jul 25, 2025
Multimodal image registration is crucial in medical imaging, particularly for aligning magnetic resonance imaging (MRI) and transrectal ultrasound (TRUS) data, which are widely used in prostate cancer diagnosis and treatment planning. The task is challenging because of the inherent differences between these modalities, including variations in resolution, contrast, and noise. Conventional convolutional neural network (CNN)-based registration methods, while effective at extracting local features, often struggle to capture global contextual information and fail to adapt to complex deformations in multimodal data. Conversely, Transformer-based methods excel at capturing long-range dependencies and hierarchical features but have difficulty integrating the fine-grained local details essential for accurate spatial alignment. To address these limitations, we propose a novel 3D image registration framework that combines the strengths of both paradigms. Our method employs a Swin Transformer (ST)-CNN encoder-decoder architecture, with the key innovation focused on the skip-connection stages: a module named Wavelet-3D-Depthwise-Attention (WDA), whose attention mechanism integrates wavelet transforms for multi-scale spatial-frequency representation with 3D depthwise convolution to improve computational efficiency and modality fusion. Experimental evaluations on clinical MRI/TRUS datasets show that the proposed method achieves a median Dice score of 0.94 and a target registration error of 0.85, improving registration accuracy and robustness over existing state-of-the-art (SOTA) methods. The WDA-enhanced skip connections help the network preserve critical anatomical details, making the method a promising advance in prostate multimodal registration, with strong potential to generalize to other registration tasks.
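
The abstract does not specify the WDA module's internals, so the PyTorch sketch below is only a plausible reading of the name: a low-frequency wavelet band (approximated here by 3D average pooling, which matches the Haar LLL band up to scaling) feeds a depthwise 3D convolution that gates the skip-connection features. Every design choice in this block is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WDABlock(nn.Module):
    """Speculative wavelet + 3D-depthwise attention gate for a skip
    connection; not the paper's verified design."""

    def __init__(self, channels: int):
        super().__init__()
        # groups=channels makes this a depthwise 3D convolution.
        self.depthwise = nn.Conv3d(channels, channels, kernel_size=3,
                                   padding=1, groups=channels)
        self.pointwise = nn.Conv3d(channels, channels, kernel_size=1)

    def forward(self, skip: torch.Tensor) -> torch.Tensor:
        low = F.avg_pool3d(skip, 2)            # Haar-like low-pass band
        attn = self.pointwise(self.depthwise(low))
        attn = F.interpolate(attn, size=skip.shape[2:], mode="trilinear",
                             align_corners=False)
        return skip * torch.sigmoid(attn)      # gated skip features
```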

Privacy-Preserving Generation of Structured Lymphoma Progression Reports from Cross-sectional Imaging: A Comparative Analysis of Llama 3.3 and Llama 4.

Prucker P, Bressem KK, Kim SH, Weller D, Kader A, Dorfner FJ, Ziegelmayer S, Graf MM, Lemke T, Gassert F, Can E, Meddeb A, Truhn D, Hadamitzky M, Makowski MR, Adams LC, Busch F

PubMed | Jul 25, 2025
Efficient processing of radiology reports for monitoring disease progression is crucial in oncology. Although large language models (LLMs) show promise in extracting structured information from medical reports, privacy concerns limit their clinical implementation. This study evaluates the feasibility and accuracy of two of the most recent Llama models for generating structured lymphoma progression reports from cross-sectional imaging data in a privacy-preserving, real-world clinical setting. This single-center, retrospective study included adult lymphoma patients who underwent cross-sectional imaging and treatment between July 2023 and July 2024. We established a chain-of-thought prompting strategy to leverage the locally deployed Llama-3.3-70B-Instruct and Llama-4-Scout-17B-16E-Instruct models to generate lymphoma disease progression reports across three iterations. Two radiologists independently scored nodal and extranodal involvement, as well as Lugano staging and treatment response classifications. For each LLM and task, we calculated the F1 score, accuracy, recall, precision, and specificity per label, as well as the case-weighted average with 95% confidence intervals (CIs). Both LLMs correctly implemented the template structure for all 65 patients included in this study. Llama-4-Scout-17B-16E-Instruct demonstrated significantly greater accuracy in extracting nodal and extranodal involvement information (nodal: 0.99 [95% CI = 0.98-0.99] vs. 0.97 [95% CI = 0.95-0.96], p < 0.001; extranodal: 0.99 [95% CI = 0.99-1.00] vs. 0.99 [95% CI = 0.98-0.99], p = 0.013). This difference was more pronounced when predicting Lugano stage and treatment response (stage: 0.85 [95% CI = 0.79-0.89] vs. 0.60 [95% CI = 0.53-0.67], p < 0.001; treatment response: 0.88 [95% CI = 0.83-0.92] vs. 0.65 [95% CI = 0.58-0.71], p < 0.001). Neither model produced hallucinations of newly involved nodal or extranodal sites. The highest relative error rates were found when interpreting the level of disease after treatment. In conclusion, privacy-preserving LLMs can effectively extract clinical information from lymphoma imaging reports. While they excel at data extraction, they are limited in their ability to generate new clinical inferences from the extracted information. Our findings suggest their potential utility in streamlining documentation and highlight areas requiring optimization before clinical implementation.
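
The study deployed the Llama models locally to keep report text on-site. One common way to run a local instruction-tuned Llama with Hugging Face transformers is sketched below; the system prompt is a hypothetical stand-in, since the paper's chain-of-thought template is not reproduced in the abstract.

```python
from transformers import pipeline

# Hypothetical chain-of-thought system prompt; the study's actual template
# is not reproduced in the abstract.
SYSTEM = (
    "You are a radiology assistant. Read the lymphoma imaging report, "
    "reason step by step about nodal and extranodal involvement, then "
    "fill in the structured progression template exactly."
)
report_text = "..."  # free-text cross-sectional imaging report goes here

# Local deployment: weights run on-site, so no report text leaves the network.
generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.3-70B-Instruct",
    device_map="auto",
)

messages = [
    {"role": "system", "content": SYSTEM},
    {"role": "user", "content": report_text},
]
out = generator(messages, max_new_tokens=1024)
print(out[0]["generated_text"][-1]["content"])  # the generated structured report
```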