
PRECISE framework: Enhanced radiology reporting with GPT for improved readability, reliability, and patient-centered care.

Tripathi S, Mutter L, Muppuri M, Dheer S, Garza-Frias E, Awan K, Jha A, Dezube M, Tabari A, Bizzo BC, Dreyer KJ, Bridge CP, Daye D

PubMed · Jun 1 2025
The PRECISE framework (Patient-Focused Radiology Reports with Enhanced Clarity and Informative Summaries for Effective Communication) leverages GPT-4 to create patient-friendly summaries of radiology reports at a sixth-grade reading level. The purpose of this study was to evaluate the effectiveness of the PRECISE framework in improving the readability, reliability, and understandability of radiology reports. We hypothesized that the PRECISE framework improves readability and patient understanding of radiology reports compared with the original versions. The framework was assessed using 500 chest X-ray reports. Readability was evaluated using the Flesch Reading Ease, Gunning Fog Index, and Automated Readability Index (ARI). Reliability was gauged by clinical volunteers, while understandability was assessed by non-medical volunteers. Statistical analyses, including t-tests, regression analyses, and Mann-Whitney U tests, were conducted to determine the significance of the differences in readability scores between the original and PRECISE-generated reports. Readability improved significantly: the mean Flesch Reading Ease score increased from 38.28 to 80.82 (p < 0.001), the Gunning Fog Index decreased from 13.04 to 6.99 (p < 0.001), and the ARI decreased from 13.33 to 5.86 (p < 0.001). Clinical volunteers rated 95% of the summaries reliable, and non-medical volunteers rated 97% of the PRECISE-generated summaries fully understandable. The PRECISE approach shows promise for enhancing patient understanding and communication without adding significant burden to radiologists. With reliable, patient-friendly summaries, it can foster patient engagement and informed decision-making, representing a step toward more inclusive, patient-centric care.
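
For context, the three readability indices reported here are simple closed-form formulas. The sketch below computes them from scratch; the regex sentence splitter and vowel-group syllable counter are crude stand-ins for the (unstated) tooling the authors used.

```python
import re

def _syllables(word: str) -> int:
    # Crude vowel-group heuristic; published studies typically use
    # dictionary-based syllable counters.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_scores(text: str) -> dict:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(1, len(words))
    n_chars = sum(len(w) for w in words)
    n_syll = sum(_syllables(w) for w in words)
    complex_words = sum(1 for w in words if _syllables(w) >= 3)
    return {
        # Flesch Reading Ease: higher is easier (sixth grade is roughly 80-90).
        "flesch": 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syll / n_words),
        # Gunning Fog Index: estimated years of schooling needed.
        "fog": 0.4 * ((n_words / sentences) + 100 * complex_words / n_words),
        # Automated Readability Index: character-based grade level.
        "ari": 4.71 * (n_chars / n_words) + 0.5 * (n_words / sentences) - 21.43,
    }
```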

Retaking assessment system based on the inspiratory state of chest X-ray image.

Matsubara N, Teramoto A, Takei M, Kitoh Y, Kawakami S

PubMed · Jun 1 2025
When a chest X-ray is taken, the patient is asked to hold maximum inspiration and the radiological technologist exposes the image at the appropriate moment. If the image is not captured at maximum inspiration, a retake is required, but operators vary in their judgment of whether a retake is necessary. We therefore developed a retake assessment system based on a convolutional neural network (CNN) to reduce this inter-operator variation. Training a CNN requires input chest X-ray images paired with correct labels indicating whether a retake is necessary; however, a static chest X-ray alone does not reveal whether inspiration was sufficient (no retake needed) or insufficient (retake required). We therefore generated input images and labels from dynamic digital radiography (DDR) and used them for training. Verification on 18 dynamic chest X-ray cases (5,400 images) and 48 actual chest X-ray cases (96 images) showed that the VGG16-based architecture achieved an assessment accuracy of 82.3% even on actual chest X-ray images. If deployed in hospitals, the proposed method could reduce the variability in judgment between operators.
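
A minimal sketch of the kind of VGG16-based binary classifier the abstract describes, assuming a two-class head ("retake required" vs. "no retake") on an ImageNet-pretrained backbone; the label mapping and training setup are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision import models

# VGG16 backbone with the 1000-class ImageNet head swapped for two classes.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def predict_retake(x: torch.Tensor) -> torch.Tensor:
    """x: batch of 3-channel chest X-ray tensors, shape (N, 3, 224, 224)."""
    model.eval()
    with torch.no_grad():
        return model(x).argmax(dim=1)  # 1 = retake required (assumed label map)
```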

Development and interpretation of a pathomics-based model for the prediction of immune therapy response in colorectal cancer.

Luo Y, Tian Q, Xu L, Zeng D, Zhang H, Zeng T, Tang H, Wang C, Chen Y

PubMed · May 31 2025
Colorectal cancer (CRC) is the third most common malignancy and the second leading cause of cancer-related deaths worldwide, with a 5-year survival rate below 20%. Immunotherapy, particularly immune checkpoint blockade (ICB)-based therapy, has become an important approach for CRC treatment; however, only specific patient subsets demonstrate significant clinical benefit. Although the TIDE algorithm can predict immunotherapy responses, its reliance on transcriptome sequencing data limits its clinical applicability. Recent advances in artificial intelligence and computational pathology provide new avenues for medical image analysis. In this study, we classified TCGA-CRC samples into immunotherapy responder and non-responder groups using the TIDE algorithm. A pathomics model based on convolutional neural networks was then constructed to predict immunotherapy response directly from histopathological images. Single-cell analysis revealed that fibroblasts may induce immunotherapy resistance in CRC through collagen-CD44 and ITGA1+ITGB1 signaling axes. The pathomics model demonstrated excellent classification performance on the test set, with an AUC of 0.88 at the patch level and 0.85 at the patient level, and key pathomics features were identified through SHAP analysis. This predictive tool provides a novel method for clinical decision-making in CRC immunotherapy, with the potential to optimize treatment strategies and advance precision medicine.
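
The abstract reports both patch-level and patient-level AUCs. A sketch of one common way to derive the patient-level score, mean-pooling patch probabilities (the paper's actual aggregation rule is not stated here):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def patient_level_auc(patch_probs, patch_patient_ids, patient_labels):
    """Aggregate patch-level responder probabilities to one score per patient.

    Mean pooling is an assumption; other rules (max, majority vote) are
    equally common in pathomics pipelines.
    """
    scores = {}
    for prob, pid in zip(patch_probs, patch_patient_ids):
        scores.setdefault(pid, []).append(prob)
    pids = sorted(scores)
    y_score = [float(np.mean(scores[pid])) for pid in pids]
    y_true = [patient_labels[pid] for pid in pids]
    return roc_auc_score(y_true, y_score)
```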

LiDSCUNet++: A lightweight depth separable convolutional UNet++ for vertebral column segmentation and spondylosis detection.

Agrawal KK, Kumar G

PubMed · May 31 2025
Accurate computer-aided diagnosis systems rely on precise segmentation of the vertebral column to assist physicians in diagnosing various disorders. However, segmenting spinal disks and bones is challenging in the presence of abnormalities and complex anatomical structures. While deep convolutional neural networks (DCNNs) achieve remarkable results in medical image segmentation, their performance is limited by data insufficiency and the high computational complexity of existing solutions. This paper introduces LiDSCUNet++, a lightweight deep learning framework that integrates depthwise-separable and pointwise convolutions into UNet++ for vertebral column segmentation. The model segments vertebral anomalies from dog radiographs, and the results are further processed by YOLOv8 for automated detection of spondylosis deformans. LiDSCUNet++ delivers comparable segmentation performance while significantly reducing trainable parameters, memory usage, energy consumption, and computational time, making it an efficient and practical solution for medical image analysis.
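
The "depthwise-separable plus pointwise" building block the framework is named for can be sketched in a few lines of PyTorch; this is the generic block, not the authors' implementation:

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution per channel followed by a 1x1 pointwise conv.

    Parameter count drops from k*k*C_in*C_out (standard conv) to
    k*k*C_in + C_in*C_out, which is the source of the "lightweight" claim.
    """
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))
```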

A conditional point cloud diffusion model for deformable liver motion tracking via a single arbitrarily-angled x-ray projection.

Xie J, Shao HC, Li Y, Yan S, Shen C, Wang J, Zhang Y

PubMed · May 30 2025
Deformable liver motion tracking from a single X-ray projection enables real-time motion monitoring and treatment intervention. We introduce a conditional point cloud diffusion model-based framework for accurate and robust liver motion tracking from arbitrarily angled single X-ray projections (PCD-Liver), which estimates volumetric liver motion by solving the deformation vector fields (DVFs) of a prior liver surface point cloud from a single X-ray image. It is a patient-specific model with two main components: a rigid alignment model that estimates the liver's overall shifts, and a conditional point cloud diffusion model that further corrects the liver surface's deformation. Conditioned on motion-encoded features extracted from a single X-ray projection by a geometry-informed feature pooling layer, the diffusion model iteratively solves detailed liver surface DVFs in a projection-angle-agnostic fashion. The liver surface motion solved by PCD-Liver is then fed as the boundary condition into a UNet-based biomechanical model that infers the liver's internal motion and localizes liver tumors. A dataset of 10 liver cancer patients was used for evaluation. We used root mean square error (RMSE) and 95th-percentile Hausdorff distance (HD95) to measure liver point cloud motion estimation accuracy, and center-of-mass error (COME) to quantify tumor localization error. The mean (±SD) RMSE, HD95, and COME of the prior liver or tumor before motion estimation were 8.82 mm (±3.58 mm), 10.84 mm (±4.55 mm), and 9.72 mm (±4.34 mm), respectively; after PCD-Liver's motion estimation, the corresponding values were 3.63 mm (±1.88 mm), 4.29 mm (±1.75 mm), and 3.46 mm (±2.15 mm). PCD-Liver maintained stable performance under highly noisy conditions. This study presents an accurate and robust framework for deformable liver motion estimation and tumor localization in image-guided radiotherapy.
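
The two point-cloud accuracy metrics used in the evaluation, RMSE and HD95, can be sketched as follows (assuming point-wise correspondence for RMSE and a symmetric nearest-neighbour HD95):

```python
import numpy as np
from scipy.spatial import cKDTree

def rmse(pred: np.ndarray, gt: np.ndarray) -> float:
    """RMSE between corresponding points; both arrays of shape (N, 3), in mm."""
    return float(np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=1))))

def hd95(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric 95th-percentile Hausdorff distance between two point clouds."""
    d_pg = cKDTree(gt).query(pred)[0]   # each predicted point -> nearest GT point
    d_gp = cKDTree(pred).query(gt)[0]   # each GT point -> nearest predicted point
    return float(max(np.percentile(d_pg, 95), np.percentile(d_gp, 95)))
```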

HVAngleEst: A Dataset for End-to-end Automated Hallux Valgus Angle Measurement from X-Ray Images.

Wang Q, Ji D, Wang J, Liu L, Yang X, Zhang Y, Liang J, Liu P, Zhao H

PubMed · May 30 2025
Accurate measurement of the hallux valgus angle (HVA) and intermetatarsal angle (IMA) is essential for diagnosing hallux valgus and determining appropriate treatment strategies. Traditional manual measurement methods, while standardized, are time-consuming, labor-intensive, and subject to evaluator bias. Deep learning has recently been applied to hallux valgus angle estimation, but developing effective algorithms requires large, well-annotated datasets. Existing X-ray datasets are typically limited to cropped foot-region images, and only one dataset, containing very few samples, is publicly available. To address these challenges, we introduce HVAngleEst, the first large-scale, open-access dataset specifically designed for hallux valgus angle estimation. HVAngleEst comprises 1,382 X-ray images from 1,150 patients and includes comprehensive annotations: foot localization, hallux valgus angles, and line segments for each phalanx. This dataset enables fully automated, end-to-end hallux valgus angle estimation, reducing manual labor and eliminating evaluator bias.
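
Given annotated line segments, the angles themselves reduce to elementary geometry. A sketch, assuming each bone axis is annotated as a two-point segment:

```python
import numpy as np

def segment_angle_deg(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Angle in degrees between two bone-axis segments, each of shape (2, 2)
    as (start_xy, end_xy). HVA is conventionally the angle between the
    first-metatarsal and proximal-phalanx axes; IMA is between the first
    and second metatarsal axes.
    """
    u = seg_a[1] - seg_a[0]
    v = seg_b[1] - seg_b[0]
    # abs() makes the result independent of segment annotation direction.
    cos = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```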

Diagnosis of trigeminal neuralgia based on plain skull radiography using convolutional neural network.

Han JH, Ji SY, Kim M, Kwon JE, Park JB, Kang H, Hwang K, Kim CY, Kim T, Jeong HG, Ahn YH, Chung HT

PubMed · May 29 2025
This study aimed to determine whether trigeminal neuralgia (TN) can be diagnosed from plain skull X-ray images using convolutional neural networks (CNNs). A labeled dataset of 166 skull images from patients over 16 years of age with trigeminal neuralgia was compiled, alongside a control dataset of 498 images from patients with unruptured intracranial aneurysms. The images were randomly partitioned into training, validation, and test sets in a 6:2:2 ratio. Classifier performance was assessed using accuracy and the area under the receiver operating characteristic curve (AUROC). Gradient-weighted class activation mapping was applied to identify regions of interest. External validation was conducted on a dataset from another institution. The CNN achieved an overall accuracy of 87.2%, with sensitivity of 0.72, specificity of 0.91, and an AUROC of 0.90 on the test dataset. In most cases, the sphenoid body and clivus were identified as key areas for predicting trigeminal neuralgia. Validation on the external dataset yielded an accuracy of 71.0%, highlighting the potential of deep learning-based models to distinguish skull X-rays of patients with trigeminal neuralgia from those of controls. Our preliminary results suggest that plain X-ray could potentially serve as an adjunct to conventional MRI, ideally with CISS sequences, to aid the clinical diagnosis of TN. Further refinement could establish this approach as a valuable screening tool.
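
The reported test metrics (accuracy, sensitivity, specificity, AUROC) can be computed from model outputs with standard tooling; a sketch assuming binary labels and a 0.5 decision threshold:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate(y_true, y_prob, threshold: float = 0.5) -> dict:
    """y_true: 1 = trigeminal neuralgia, 0 = control; y_prob: model scores."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "auroc": roc_auc_score(y_true, y_prob),
    }
```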

Exploring best-performing radiomic features with combined multilevel discrete wavelet decompositions for multiclass COVID-19 classification using chest X-ray images.

Özcan H

PubMed · May 29 2025
Discrete wavelet transforms have been applied in many machine learning models for the analysis of COVID-19; however, little is known about the impact of combining multilevel wavelet decompositions on disease identification. This study proposes a computer-aided diagnosis system that addresses the combined multilevel effects of multiscale radiomic features on multiclass COVID-19 classification using chest X-ray images. A two-level discrete wavelet transform was applied to an optimal region of interest to obtain multiscale decompositions. Both approximation and detail coefficients were extensively investigated across frequency bands through 1,240 experimental models. High dimensionality in the feature space was managed with a proposed filter- and wrapper-based feature selection approach. A comprehensive comparison was conducted between bands and features to identify the best-performing ensemble models. The results indicated that incorporating multilevel decompositions can improve model performance, and that an inclusive region of interest encompassing both lungs and the mediastinum enhances feature representation. The light gradient-boosting machine, applied to combined bands with basic, gray-level, Gabor, histogram-of-oriented-gradients, and local-binary-pattern features, achieved the highest weighted precision, sensitivity, specificity, and accuracy of 97.50%, 97.50%, 98.75%, and 97.50%, respectively. The COVID-19-versus-the-rest area under the receiver operating characteristic curve was 0.9979. These results underscore the potential of combining decomposition levels with the original signals and employing an inclusive region of interest for effective COVID-19 detection, while keeping feature selection and training within a practical computational time.
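
A sketch of the two-level decomposition step using PyWavelets; the 'db2' mother wavelet is an assumption, as the abstract does not name the wavelet family:

```python
import numpy as np
import pywt

def multilevel_wavelet_bands(roi: np.ndarray, wavelet: str = "db2") -> dict:
    """Two-level 2D DWT of a chest X-ray region of interest.

    Returns the level-2 approximation band plus the detail bands from both
    levels, from which texture features (gray-level, Gabor, HOG, LBP, ...)
    can subsequently be extracted.
    """
    cA2, (cH2, cV2, cD2), (cH1, cV1, cD1) = pywt.wavedec2(roi, wavelet, level=2)
    return {"A2": cA2, "H2": cH2, "V2": cV2, "D2": cD2,
            "H1": cH1, "V1": cV1, "D1": cD1}
```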

Estimating Total Lung Volume from Pixel-Level Thickness Maps of Chest Radiographs Using Deep Learning.

Dorosti T, Schultheiss M, Schmette P, Heuchert J, Thalhammer J, Gassert FT, Sellerer T, Schick R, Taphorn K, Mechlem K, Birnbacher L, Schaff F, Pfeiffer F, Pfeiffer D

PubMed · May 28 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To estimate the total lung volume (TLV) from real and synthetic frontal chest radiographs (CXR) on a pixel level using lung thickness maps generated by a U-Net deep learning model. Materials and Methods This retrospective study included 5,959 chest CT scans from two public datasets: the lung nodule analysis 2016 (<i>n</i> = 656) and the Radiological Society of North America (RSNA) pulmonary embolism detection challenge 2020 (<i>n</i> = 5,303). Additionally, 72 participants were selected from the Klinikum Rechts der Isar dataset (October 2018 to December 2019), each with a corresponding chest radiograph taken within seven days. Synthetic radiographs and lung thickness maps were generated using forward projection of CT scans and their lung segmentations. A U-Net model was trained on synthetic radiographs to predict lung thickness maps and estimate TLV. Model performance was assessed using mean squared error (MSE), Pearson correlation coefficient <b>(r)</b>, and two-sided Student's t-distribution. Results The study included 72 participants (45 male, 27 female, 33 healthy: mean age 62 years [range 34-80]; 39 with chronic obstructive pulmonary disease: mean age 69 years [range 47-91]). TLV predictions showed low error rates (MSEPublic-Synthetic = 0.16 L<sup>2</sup>, MSEKRI-Synthetic = 0.20 L<sup>2</sup>, MSEKRI-Real = 0.35 L<sup>2</sup>) and strong correlations with CT-derived reference standard TLV (nPublic-Synthetic = 1,191, r = 0.99, <i>P</i> < .001; nKRI-Synthetic = 72, r = 0.97, <i>P</i> < .001; nKRI-Real = 72, r = 0.91, <i>P</i> < .001). When evaluated on different datasets, the U-Net model achieved the highest performance for TLV estimation on the Luna16 test dataset, with the lowest mean squared error (MSE = 0.09 L<sup>2</sup>) and strongest correlation (<i>r</i> = 0.99, <i>P</i> <.001) compared with CT-derived TLV. Conclusion The U-Net-generated pixel-level lung thickness maps successfully estimated TLV for both synthetic and real radiographs. ©RSNA, 2025.

An AI system for continuous knee osteoarthritis severity grading: An anomaly detection inspired approach with few labels.

Belton N, Lawlor A, Curran KM

PubMed · May 28 2025
The diagnostic accuracy and subjectivity of existing knee osteoarthritis (OA) ordinal grading systems have been a subject of ongoing debate and concern. Existing automated solutions are trained to emulate these imperfect systems, while also relying on large annotated databases for fully supervised training. This work proposes a three-stage approach for automated continuous grading of knee OA built on the principles of anomaly detection (AD): learning a robust representation of healthy knee X-rays and grading disease severity by its distance from the centre of normality. In the first stage, we propose SS-FewSOME, a self-supervised AD technique that learns the 'normal' representation, requiring only examples of healthy subjects and <3% of the labels that existing methods require. In the second stage, this model pseudo-labels a subset of unlabelled data as 'normal' or 'anomalous', and the pseudo labels are denoised with CLIP. The final stage retrains on labelled and pseudo-labelled data using the proposed Dual Centre Representation Learning (DCRL), which learns the centres of two representation spaces, normal and anomalous; disease severity is then graded by distance to the learned centres. The proposed methodology outperforms existing techniques by margins of up to 24% in OA detection, and its severity scores correlate with the Kellgren-Lawrence grading system at the level of human expert performance. Code is available at https://github.com/niamhbelton/SS-FewSOME_Disease_Severity_Knee_Osteoarthritis.
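
A sketch of the final grading step, assuming a simple normalized distance between the two learned centres stands in for DCRL's actual grading rule:

```python
import numpy as np

def continuous_oa_grade(embedding: np.ndarray, normal_centre: np.ndarray,
                        anomalous_centre: np.ndarray) -> float:
    """Continuous severity score from distances to the two learned centres.

    Returns a value in [0, 1]: 0 = at the normal centre, 1 = at the
    anomalous centre. The normalization is an assumption for illustration.
    """
    d_normal = np.linalg.norm(embedding - normal_centre)
    d_anomalous = np.linalg.norm(embedding - anomalous_centre)
    return float(d_normal / (d_normal + d_anomalous + 1e-12))
```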