Page 5 of 4454447 results

Evaluation of Context-Aware Prompting Techniques for Classification of Tumor Response Categories in Radiology Reports Using a Large Language Model.

Park J, Sim WS, Yu JY, Park YR, Lee YH

PubMed · Sep 29, 2025
Radiology reports are essential for medical decision-making, providing crucial data for diagnosing diseases, devising treatment plans, and monitoring disease progression. While large language models (LLMs) have shown promise in processing free-text reports, research on effective prompting techniques for radiologic applications remains limited. To evaluate the effectiveness of LLM-driven classification of tumor response category (TRC) from radiology reports, and to optimize the model by comparing four prompt engineering techniques for this classification task in clinical applications, we included 3062 whole-spine contrast-enhanced magnetic resonance imaging (MRI) radiology reports for prompt engineering and validation. TRCs were labeled by two radiologists based on criteria modified from the Response Evaluation Criteria in Solid Tumors (RECIST) guidelines. The Llama 3 instruct model was used to classify TRCs through four different prompts: General, In-Context Learning (ICL), Chain-of-Thought (CoT), and ICL with CoT. AUROC, accuracy, precision, recall, and F1-score were calculated for each prompt and model size (8B, 70B) on the test report dataset. The average AUROC for the ICL (0.96 internally, 0.93 externally) and ICL-with-CoT prompts (0.97 internally, 0.94 externally) outperformed the other prompts. Error rates increased with prompt complexity, including 0.8% incomplete-sentence errors and 11.3% probability-classification inconsistencies. This study demonstrates that context-aware LLM prompts substantially improved the efficiency and effectiveness of classifying TRCs from radiology reports, despite potential intrinsic hallucinations. While further improvements are required for real-world application, our findings suggest that context-aware prompts have significant potential for segmenting complex radiology reports and enhancing oncology clinical workflows.
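The four prompting styles compared in the study differ mainly in what is prepended to the report text. A minimal sketch of how such prompts could be assembled; the label set, example wording, and instructions below are hypothetical stand-ins, not the paper's actual prompts:

```python
# Hypothetical TRC label set and few-shot example (not taken from the paper).
TRC_LABELS = ["complete response", "partial response", "stable disease", "progressive disease"]

FEW_SHOT_EXAMPLE = (
    "Report: Interval decrease in size of the enhancing T7 metastatic lesion.\n"
    "Category: partial response"
)

def build_prompt(report: str, in_context: bool = False, chain_of_thought: bool = False) -> str:
    """Assemble a General / ICL / CoT / ICL+CoT style prompt for one report."""
    parts = [
        "Classify the tumor response category of this radiology report "
        f"into one of: {', '.join(TRC_LABELS)}."
    ]
    if in_context:            # ICL: prepend a labeled example report
        parts.append(FEW_SHOT_EXAMPLE)
    if chain_of_thought:      # CoT: request explicit step-by-step reasoning
        parts.append("Explain your reasoning step by step, then state the category.")
    parts.append(f"Report: {report}\nCategory:")
    return "\n\n".join(parts)

prompt = build_prompt(
    "Stable appearance of the known L3 lesion.",
    in_context=True,
    chain_of_thought=True,
)
```

The ICL+CoT variant simply combines both flags; the resulting string would then be sent to the instruct model for completion.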

Artificial Intelligence Deep Learning Ultrasound Discrimination of Cosmetic Fillers: A Multicenter Study.

Wortsman X, Lozano M, Rodriguez FJ, Valderrama Y, Ortiz-Orellana G, Zattar L, de Cabo F, Ducati E, Sigrist R, Fontan C, Rezende J, Gonzalez C, Schelke L, Zavariz J, Barrera P, Velthuis P

PubMed · Sep 29, 2025
Despite the growing use of artificial intelligence (AI) in medicine, imaging, and dermatology, to date there is no information on the use of AI for discriminating cosmetic fillers on ultrasound (US). An international collaborative group working in dermatologic and esthetic US was formed and worked with the staff of the Department of Computer Science and AI of the Universidad de Granada to gather and process a large set of anonymized images. AI techniques based on deep learning (DL) with the YOLO (You Only Look Once) architecture, together with a bounding-box annotation tool, allowed experts to manually delineate regions of interest for the discrimination of common cosmetic fillers under real-world conditions. A total of 14 physicians from 6 countries participated in the AI study and compiled a final dataset comprising 1432 US images, including HA (hyaluronic acid), PMMA (polymethylmethacrylate), CaHA (calcium hydroxyapatite), and SO (silicone oil) filler cases. The model exhibited robust and consistent classification performance, with an average accuracy of 0.92 ± 0.04 across the cross-validation folds. YOLOv11 demonstrated outstanding performance in the detection of HA and SO, yielding F1-scores of 0.96 ± 0.02 and 0.94 ± 0.04, respectively. By contrast, CaHA and PMMA showed somewhat lower and less consistent precision and recall, with F1-scores around 0.83. AI using YOLOv11 allowed us to discriminate reliably between HA and SO across high-frequency US devices of varying complexity and across operators. Further DL-specific work is needed to identify CaHA and PMMA more accurately.
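The per-filler F1-scores quoted above are the harmonic mean of precision and recall. As a quick reference, a minimal computation from per-class confusion counts; the counts below are illustrative only, not data from the study:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Per-class F1 from confusion counts: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts chosen to land near the reported F1 levels
# (~0.83 for CaHA/PMMA, ~0.96 for HA) -- not the paper's confusion matrices.
f1_caha = f1_score(tp=83, fp=17, fn=17)  # 0.83
f1_ha = f1_score(tp=96, fp=4, fn=4)      # 0.96
```

Because F1 penalizes an imbalance between precision and recall, classes with "less consistent precision and recall" (CaHA, PMMA) score lower even at similar accuracy.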

A Scalable Distributed Framework for Multimodal GigaVoxel Image Registration

Rohit Jena, Vedant Zope, Pratik Chaudhari, James C. Gee

arXiv preprint · Sep 29, 2025
In this work, we propose FFDP, a set of IO-aware non-GEMM fused kernels supplemented with a distributed framework for image registration at unprecedented scales. Image registration is an inverse problem fundamental to the biomedical and life sciences, but algorithms have not scaled in tandem with image acquisition capabilities. Our framework complements existing model-parallelism techniques proposed for large-scale transformer training by optimizing non-GEMM bottlenecks and enabling convolution-aware tensor sharding. We demonstrate unprecedented capabilities by performing multimodal registration of a 100-micron ex vivo human brain MRI volume at native resolution - an inverse problem more than 570x larger than a standard clinical datum - in about a minute using only 8 A6000 GPUs. FFDP accelerates existing state-of-the-art optimization and deep learning registration pipelines by up to 6-7x while reducing peak memory consumption by 20-59%. Comparative analysis on a 250-micron dataset shows that FFDP can fit problems up to 64x larger than the existing SOTA on a single GPU, and highlights both the performance and efficiency gains of FFDP compared to SOTA image registration methods.

Enhancing Spinal Cord and Canal Segmentation in Degenerative Cervical Myelopathy: The Role of Interactive Learning Models with Manual Clicks.

Han S, Oh JK, Cho W, Kim TJ, Hong N, Park SB

PubMed · Sep 29, 2025
We aim to develop an interactive segmentation model that offers accuracy and reliability for segmenting the irregularly shaped spinal cord and canal in degenerative cervical myelopathy (DCM) through manual clicks and model refinement. A dataset of 1444 frames from 294 magnetic resonance imaging records of DCM patients was used, and we developed two segmentation models for comparison: auto-segmentation and interactive segmentation. The former was based on U-Net and used a pretrained ConvNeXt-tiny as its encoder. For the latter, we employed an interactive segmentation model structured by SimpleClick, a large model with a vision transformer backbone, together with simple fine-tuning. The segmentation performance of the two models was compared in terms of Dice score, mean intersection over union (mIoU), average precision, and Hausdorff distance. The efficiency of the interactive segmentation model was evaluated by the number of clicks required to achieve a target mIoU. Our model achieved better scores across all four evaluation metrics, showing improvements of +6.4%, +1.8%, +3.7%, and -53.0% for canal segmentation, and +11.7%, +6.0%, +18.2%, and -70.9% for cord segmentation with 15 clicks, respectively. The average numbers of clicks required for the interactive segmentation model to reach 90% mIoU on spinal canal-with-cord cases and 80% mIoU on spinal cord cases were 11.71 and 11.99, respectively. We found that the interactive segmentation model significantly outperformed the auto-segmentation model. By incorporating simple manual inputs, the interactive model effectively identified regions of interest, particularly in the complex and irregular shapes of the spinal cord, demonstrating both enhanced accuracy and adaptability.
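The Dice score and mIoU reported here are both overlap measures between a predicted and a ground-truth mask. A minimal sketch for binary masks, using a small toy example rather than the study's data:

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray) -> tuple:
    """Dice = 2|A∩B| / (|A|+|B|); IoU = |A∩B| / |A∪B| for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum())
    iou = inter / union
    return float(dice), float(iou)

# Toy 2x3 masks: 2 pixels overlap, 4 pixels in the union.
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt = np.array([[1, 0, 0], [0, 1, 1]])
dice, iou = dice_and_iou(pred, gt)  # dice = 4/6, iou = 2/4
```

Dice is always at least as large as IoU for the same pair of masks, which is why the two metrics move together but take different values in the paper's tables.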

Diffusion Model-Based Design of Bionic Bone Scaffolds with Tunable Microstructures.

Chen J, Shen S, Xu L, Zheng Z, Zou X, Ye M, Zhang C, Liu H, Yao P, Xu RX

PubMed · Sep 29, 2025
In the clinical treatment of bone defects that exceed the critical size threshold, traditional methods using metal fixation devices, autografts, and allografts exhibit significant limitations. Meanwhile, bone scaffolds, with minimal risk of secondary injury and low immune rejection, are emerging as a promising alternative. The effective design of porosity, pore size, and trabecular thickness in bone scaffolds is critical; however, current strategies often struggle to optimally balance these parameters. Here, we propose a bionic bone scaffold design method that mimics multiple properties of natural cancellous bone using a diffusion model. First, we develop a classifier-free conditional diffusion model and train it on a micro-CT (μCT) image dataset of porcine vertebral cancellous bone. The trained model can produce personalized 2-dimensional images of natural-like bone with tunable microstructures. Subsequently, we stack images layer by layer to form 3-dimensional scaffolds, mimicking the CT/μCT image reconstruction process. Finally, computational fluid dynamics analysis is conducted to validate the scaffold models' fluid properties, while bioresin bone scaffold samples are 3D-printed for mechanical testing and biocompatibility assessment. The three key morphological parameters of the generated images - porosity (50-70%), pore size (468-936 μm), and trabecular thickness (156-312 μm) - can be precisely and independently controlled. Fluid simulation and mechanical testing confirm the scaffolds' robust performance in permeability (10⁻⁹ to 10⁻⁸ m²), average fluid shear stress (0.1-0.3 Pa), Young's modulus (14-fold adjustable range), compressive strength (9-fold adjustable range), and viscoelastic properties. The scaffolds also exhibit good biocompatibility, meeting the basic requirements for clinical implantation. These promising results highlight the potential of our method for the personalized design of scaffolds to effectively repair large bone defects.

Elemental composition analysis of calcium-based urinary stones via laser-induced breakdown spectroscopy for enhanced clinical insights.

Xie H, Huang J, Wang R, Ma X, Xie L, Zhang H, Li J, Liu C

PubMed · Sep 29, 2025
The purpose of this study was to profile the elemental composition of calcium-based urinary stones using laser-induced breakdown spectroscopy (LIBS) and to develop a machine learning model to distinguish recurrence-associated profiles by integrating elemental and clinical data. A total of 122 calcium-based stones (41 calcium oxalate, 11 calcium phosphate, 49 calcium oxalate/calcium phosphate, 8 calcium oxalate/uric acid, 13 calcium phosphate/struvite) were analyzed via LIBS. Elemental intensity ratios (H/Ca, P/Ca, Mg/Ca, Sr/Ca, Na/Ca, K/Ca) were calculated using the Ca line at 396.847 nm as reference. Clinical variables (demographics, laboratory and imaging results, recurrence status) were retrospectively collected. A backpropagation neural network (BPNN) model was trained using four data strategies: clinical-only, spectral principal components (PCs), PCs combined with clinical data, and merged raw spectral plus clinical data. The performance of these four models was evaluated, and sixteen stone samples from other medical centers were used as an external validation set. Mg and Sr were detected in most stones, and significant correlations existed among the P, Mg, Sr, and K ratios. Recurrent patients showed elevated elemental ratios (p < 0.01), higher urine pH (p < 0.01), and lower stone CT density (p = 0.044). The BPNN model with merged spectral plus clinical data achieved the best classification performance (test set accuracy: 94.37%), significantly outperforming clinical-only models (test set accuracy: 73.37%). External validation indicated that the model generalizes well. LIBS reveals ubiquitous Mg and Sr in calcium-based stones and elevated elemental ratios in recurrent cases. Integrating elemental profiles with clinical data enables high-accuracy classification of recurrence-associated profiles, providing insights for potential risk stratification in urolithiasis management.

Causal-Adapter: Taming Text-to-Image Diffusion for Faithful Counterfactual Generation

Lei Tong, Zhihua Liu, Chaochao Lu, Dino Oglic, Tom Diethe, Philip Teare, Sotirios A. Tsaftaris, Chen Jin

arXiv preprint · Sep 29, 2025
We present Causal-Adapter, a modular framework that adapts frozen text-to-image diffusion backbones for counterfactual image generation. Our method enables causal interventions on target attributes, consistently propagating their effects to causal dependents without altering the core identity of the image. In contrast to prior approaches that rely on prompt engineering without explicit causal structure, Causal-Adapter leverages structural causal modeling augmented with two attribute regularization strategies: prompt-aligned injection, which aligns causal attributes with textual embeddings for precise semantic control, and a conditioned token contrastive loss to disentangle attribute factors and reduce spurious correlations. Causal-Adapter achieves state-of-the-art performance on both synthetic and real-world datasets, with up to 91% MAE reduction on Pendulum for accurate attribute control and 87% FID reduction on ADNI for high-fidelity MRI image generation. These results show that our approach enables robust, generalizable counterfactual editing with faithful attribute modification and strong identity preservation.

Clinical and MRI markers for acute vs chronic temporomandibular disorders using machine learning and deep neural networks.

Lee YH, Jeon S, Kim DH, Auh QS, Lee JH, Noh YK

PubMed · Sep 29, 2025
Exploring the transition from acute to chronic temporomandibular disorders (TMD) remains challenging due to the multifactorial nature of the disease. This study aims to identify clinical, behavioral, and imaging-based predictors that contribute to symptom chronicity in patients with TMD. We enrolled 239 patients with TMD (161 women, 78 men; mean age 35.60 ± 17.93 years), classified as acute (<6 months) or chronic (≥6 months) based on symptom duration. TMD was diagnosed according to the Diagnostic Criteria for TMD (DC/TMD Axis I). Clinical data, sleep-related variables, and temporomandibular joint (TMJ) magnetic resonance imaging (MRI) were collected. MRI assessments included anterior disc displacement (ADD), joint space narrowing, osteoarthritis, and effusion using 3 T T2-weighted and proton density scans. Predictors were evaluated using logistic regression and deep neural networks (DNN), and their performance was compared. Chronic TMD was observed in 51.05% of patients. Compared with acute cases, chronic TMD was more frequently associated with TMJ noise (70.5%), bruxism (31.1%), and higher pain intensity (VAS: 4.82 ± 2.47). Chronic patients also had shorter sleep duration and higher STOP-Bang scores, indicating a greater risk of obstructive sleep apnea. MRI findings revealed an increased prevalence of ADD (86.9%), TMJ-OA (82.0%), and joint space narrowing (88.5%) in chronic TMD. Logistic regression achieved an AUROC of 0.7550 (95% CI: 0.6550-0.8550), identifying TMJ noise, bruxism, VAS, sleep disturbance, STOP-Bang ≥ 5, ADD, and joint space narrowing as significant predictors. The DNN model improved accuracy to 79.49% from 75.50%, though the difference was not statistically significant (p = 0.3067). Behavioral and TMJ-related structural factors are key predictors of chronic TMD and may aid early identification. Timely recognition may support personalized strategies and improve outcomes.
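The AUROC reported for the logistic regression model can be read as the probability that a randomly chosen chronic case receives a higher predicted risk than a randomly chosen acute case. A minimal sketch of that rank-based (Mann-Whitney U) formulation, with toy scores rather than the study's predictions:

```python
def auroc(scores, labels):
    """AUROC as P(score of random positive > score of random negative);
    tied scores count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy predicted probabilities of chronic TMD (illustrative, not study data):
# 5 of the 6 positive-negative pairs are correctly ordered -> AUROC = 5/6.
area = auroc([0.9, 0.8, 0.6, 0.4, 0.3], [1, 1, 0, 1, 0])
```

This pairwise definition is equivalent to the area under the ROC curve and makes clear why AUROC is insensitive to the choice of a single classification threshold.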

Clinical application of deep learning for enhanced multistage caries detection in panoramic radiographs.

Pornprasertsuk-Damrongsri S, Vachmanus S, Papasratorn D, Kitisubkanchana J, Chaikantha S, Arayasantiparb R, Mongkolwat P

PubMed · Sep 29, 2025
Dental caries are frequently overlooked on panoramic radiographs. This study leverages deep learning to identify multistage caries on panoramic radiographs. The panoramic radiographs were confirmed against gold-standard bitewing radiographs to create a reliable ground truth. The dataset of 500 panoramic radiographs with corresponding bitewing confirmations was labelled by an experienced, calibrated radiologist, who annotated 1,792 carious lesions across 14,997 teeth. The annotations were stored using the Annotation and Image Markup standard to ensure consistency and reliability. The deep learning system employed a two-model approach: YOLOv5 for tooth detection and Attention U-Net for caries segmentation. The system achieved impressive results, demonstrating strong agreement with dentists for both caries counts and classifications (enamel, dentine, and pulp). However, some discrepancies exist, particularly an underestimation of enamel caries. While the model occasionally overpredicts caries in healthy teeth (false positives), it prioritizes minimizing missed lesions (false negatives), achieving a high recall of 0.96. Overall performance surpasses previously reported values, with an F1-score of 0.85 and an accuracy of 0.93 for caries segmentation in posterior teeth. The deep learning approach demonstrates promising potential to aid dentists in caries diagnosis, treatment planning, and dental education.

Evaluating Temperature Scaling Calibration Effectiveness for CNNs under Varying Noise Levels in Brain Tumour Detection

Ankur Chanda, Kushan Choudhury, Shubhrodeep Roy, Shubhajit Biswas, Somenath Kuiry

arXiv preprint · Sep 29, 2025
Precise confidence estimation in deep learning is vital for high-stakes fields like medical imaging, where overconfident misclassifications can have serious consequences. This work evaluates the effectiveness of Temperature Scaling (TS), a post-hoc calibration technique, in improving the reliability of convolutional neural networks (CNNs) for brain tumor classification. We develop a custom CNN and train it on a merged brain MRI dataset. To simulate real-world uncertainty, five types of image noise are introduced: Gaussian, Poisson, salt-and-pepper, speckle, and uniform. Model performance is evaluated using precision, recall, F1-score, accuracy, negative log-likelihood (NLL), and expected calibration error (ECE), both before and after calibration. Results demonstrate that TS significantly reduces ECE and NLL under all noise conditions without degrading classification accuracy. This underscores TS as an effective and computationally efficient approach to enhancing the decision confidence of medical AI systems, making model outputs more reliable in noisy or uncertain settings.
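Temperature Scaling divides the logits by a single scalar T fitted on held-out data to minimize NLL; since dividing by a positive constant never changes the argmax, accuracy is untouched while confidence is softened. A minimal sketch, with a grid search standing in for the usual L-BFGS fit and toy overconfident logits rather than the paper's CNN outputs:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; T=1 recovers the uncalibrated probabilities."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Mean negative log-likelihood of the true labels at temperature T."""
    probs = softmax(logits, T)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Post-hoc TS: choose the scalar T minimizing held-out NLL
    (a simple grid search stands in for the usual L-BFGS optimization)."""
    return min(grid, key=lambda T: nll(logits, labels, T))

# Toy overconfident model: 3 of 4 predictions correct, all near-certain.
logits = np.array([[5.0, 0.0], [5.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
labels = np.array([0, 0, 1, 1])
T = fit_temperature(logits, labels)  # T > 1 softens the overconfident outputs
```

Because the fitted T exceeds 1 for overconfident models, the calibrated probabilities shrink toward the observed accuracy, which is exactly the ECE/NLL reduction the abstract reports.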