
Artificial Intelligence in Value-Based Health Care.

Shah R, Bozic KJ, Jayakumar P

PubMed · May 28, 2025
Artificial intelligence (AI) presents new opportunities to advance value-based healthcare in orthopedic surgery through 3 potential mechanisms: agency, automation, and augmentation. AI may enhance patient agency through improved health literacy and remote monitoring while reducing costs through triage and reduction in specialist visits. In automation, AI optimizes operating room scheduling and streamlines administrative tasks, with documented cost savings and improved efficiency. For augmentation, AI has been shown to be accurate in diagnostic imaging interpretation and surgical planning, while enabling more precise outcome predictions and personalized treatment approaches. However, implementation faces substantial challenges, including resistance from healthcare professionals, technical barriers to data quality and privacy, and significant financial investments required for infrastructure. Success in healthcare AI integration requires careful attention to regulatory frameworks, data privacy, and clinical validation.

Quantitative computed tomography imaging classification of cement dust-exposed patients-based Kolmogorov-Arnold networks.

Chau NK, Kim WJ, Lee CH, Chae KJ, Jin GY, Choi S

PubMed · May 27, 2025
Occupational health assessment is critical for detecting respiratory issues caused by harmful exposures, such as cement dust. Quantitative computed tomography (QCT) imaging provides detailed insights into lung structure and function, enhancing the diagnosis of lung diseases. However, its high dimensionality poses challenges for traditional machine learning methods. In this study, Kolmogorov-Arnold networks (KANs) were used for the binary classification of QCT imaging data to assess respiratory conditions associated with cement dust exposure. The dataset comprised QCT images from 609 individuals, including 311 subjects exposed to cement dust and 298 healthy controls. We derived 141 QCT-based variables and employed KANs with two hidden layers of 15 and 8 neurons. The network parameters, including grid intervals, polynomial order, learning rate, and penalty strengths, were carefully fine-tuned. Model performance was assessed through various metrics, including accuracy, precision, recall, F1 score, specificity, and the Matthews correlation coefficient (MCC). Five-fold cross-validation was employed to enhance the robustness of the evaluation, and SHAP analysis was applied to identify the QCT features to which the model was most sensitive. The KAN model demonstrated consistently high performance across all metrics, with an average accuracy of 98.03%, precision of 97.35%, recall of 98.70%, F1 score of 98.01%, and specificity of 97.40%. The MCC value further confirmed the robustness of the model in managing imbalanced datasets. Comparative analysis demonstrated that the KAN model outperformed traditional methods and other deep learning approaches, such as TabPFN, ANN, FT-Transformer, VGG19, MobileNets, ResNet101, XGBoost, SVM, random forest, and decision tree. SHAP analysis highlighted structural and functional lung features, such as airway geometry, wall thickness, and lung volume, as key predictors. KANs significantly improved the classification of QCT imaging data, enhancing early detection of cement dust-induced respiratory conditions. SHAP analysis supported model interpretability, enhancing its potential for clinical translation in occupational health assessments.
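As a rough illustration of the evaluation protocol described in this abstract (stratified five-fold cross-validation with accuracy, precision, recall, F1 score, specificity, and MCC), the sketch below uses a placeholder scikit-learn classifier in place of the KAN; the 141-column feature matrix `X` and binary labels `y` are assumed inputs, not data from the study.

```python
# Sketch of the evaluation protocol: stratified five-fold cross-validation
# over QCT-derived variables with accuracy, precision, recall, F1, specificity,
# and MCC. A placeholder classifier stands in for the KAN (assumption).
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier  # stand-in for the KAN
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef, confusion_matrix)

def evaluate_qct_classifier(X, y, n_splits=5, seed=0):
    """X: (n_subjects, 141) QCT features; y: 0 = control, 1 = exposed."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = {"acc": [], "prec": [], "rec": [], "f1": [], "spec": [], "mcc": []}
    for train_idx, test_idx in skf.split(X, y):
        clf = MLPClassifier(hidden_layer_sizes=(15, 8), max_iter=2000,
                            random_state=seed)  # mirrors the 15/8 hidden sizes
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        tn, fp, fn, tp = confusion_matrix(y[test_idx], pred).ravel()
        scores["acc"].append(accuracy_score(y[test_idx], pred))
        scores["prec"].append(precision_score(y[test_idx], pred))
        scores["rec"].append(recall_score(y[test_idx], pred))
        scores["f1"].append(f1_score(y[test_idx], pred))
        scores["spec"].append(tn / (tn + fp))
        scores["mcc"].append(matthews_corrcoef(y[test_idx], pred))
    return {k: float(np.mean(v)) for k, v in scores.items()}
```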

Development and validation of a CT-based radiomics machine learning model for differentiating immune-related interstitial pneumonia.

Luo T, Guo J, Xi J, Luo X, Fu Z, Chen W, Huang D, Chen K, Xiao Q, Wei S, Wang Y, Du H, Liu L, Cai S, Dong H

PubMed · May 27, 2025
Immune checkpoint inhibitor-related interstitial pneumonia (CIP) poses a diagnostic challenge due to its radiographic similarity to other pneumonias. We developed a non-invasive model using CT imaging to differentiate CIP from other pneumonias (OTP). We analyzed patients who developed CIP or OTP after immunotherapy at five medical centers between 2020 and 2023 and randomly divided them into training and validation cohorts at a 7:3 ratio. A radiomics model was developed using random forest analysis. A new model was then built by combining independent risk factors for CIP. The models were evaluated using ROC, calibration, and decision curve analysis. A total of 238 patients with pneumonia following immunotherapy were included, comprising 116 with CIP and 122 with OTP. After random allocation, the training cohort included 166 patients and the validation cohort included 72 patients. A radiomics model composed of 11 radiomic features was established using the random forest method, with an AUC of 0.833 in the training cohort and 0.821 in the validation cohort. Univariate and multivariate logistic regression analyses revealed significant differences in smoking history, radiotherapy history, and radiomics score between CIP and OTP (p < 0.05). A new model was constructed based on these three factors and presented as a nomogram. This model showed good calibration and net benefit in both the training and validation cohorts, with AUCs of 0.872 and 0.860, respectively. Using the random forest machine learning method, we successfully constructed a CT-based radiomics differential diagnostic model for CIP that can accurately, non-invasively, and rapidly provide clinicians with etiological support for pneumonia diagnosis.
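A minimal sketch of the combined-model step described above, assuming a logistic regression over the radiomics score plus smoking and radiotherapy history with an AUC readout; the column names and data layout are illustrative assumptions, not the authors' code.

```python
# Combined model sketch: logistic regression over the radiomics score plus
# two clinical factors, evaluated by AUC on a held-out cohort.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def fit_combined_model(train_df: pd.DataFrame, valid_df: pd.DataFrame):
    # Column names are hypothetical placeholders.
    features = ["radiomics_score", "smoking_history", "radiotherapy_history"]
    model = LogisticRegression()
    model.fit(train_df[features], train_df["is_cip"])   # 1 = CIP, 0 = OTP
    valid_prob = model.predict_proba(valid_df[features])[:, 1]
    return model, roc_auc_score(valid_df["is_cip"], valid_prob)
```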

A Left Atrial Positioning System to Enable Follow-Up and Cohort Studies.

Mehringer NJ, McVeigh ER

PubMed · May 27, 2025
We present a new algorithm to automatically convert 3-dimensional left atrium surface meshes into a standard 2-dimensional space: a Left Atrial Positioning System (LAPS). Forty-five contrast-enhanced 4-dimensional computed tomography datasets were collected from 30 subjects. The left atrium volume was segmented using a trained neural network and converted into a surface mesh. LAPS coordinates were calculated on each mesh by computing lines of longitude and latitude on the surface of the mesh with reference to the center of the posterior wall and the mitral valve. LAPS accuracy was evaluated with one-way transfer of coordinates from a template mesh to a synthetic ground truth, which was created by registering the template mesh and pre-calculated LAPS coordinates to a target mesh. The Euclidean distance error was measured between each test node and its ground-truth location. The median point transfer error was 2.13 mm between follow-up scans of the same subject (n = 15) and 3.99 mm between different subjects (n = 30). The left atrium was divided into 24 anatomic regions and represented on a 2D square diagram. The Left Atrial Positioning System is fully automatic, accurate, robust to anatomic variation, and offers flexible visualization for mapping data in the left atrium. This provides a framework for comparing regional LA surface data values in both follow-up and cohort studies.
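The reported point-transfer error can be illustrated as below: the Euclidean distance between each transferred vertex and its ground-truth location, summarized by the median. The row-wise correspondence of the two (N, 3) arrays is an assumption of this sketch.

```python
# Median point-transfer error between transferred and ground-truth vertices.
import numpy as np

def median_point_transfer_error(transferred_xyz: np.ndarray,
                                ground_truth_xyz: np.ndarray) -> float:
    """Both arrays have shape (N, 3) with row-wise correspondence (in mm)."""
    errors = np.linalg.norm(transferred_xyz - ground_truth_xyz, axis=1)
    return float(np.median(errors))
```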

Modeling Brain Aging with Explainable Triamese ViT: Towards Deeper Insights into Autism Disorder.

Zhang Z, Aggarwal V, Angelov P, Jiang R

PubMed · May 27, 2025
Machine learning, particularly through advanced imaging techniques such as three-dimensional Magnetic Resonance Imaging (MRI), has significantly improved medical diagnostics. This is especially critical for diagnosing complex conditions like Alzheimer's disease. Our study introduces Triamese-ViT, an innovative tri-structure of Vision Transformers (ViTs) with a built-in interpretability function: its structure-aware explainability allows the key features or regions contributing to a prediction to be identified and visualized, while information from three perspectives is integrated to enhance brain age estimation. This method not only increases accuracy but also improves interoperability with existing techniques. When evaluated, Triamese-ViT demonstrated superior performance and produced insightful attention maps. We applied these attention maps to the analysis of natural aging and the diagnosis of Autism Spectrum Disorder (ASD). The results aligned with those from occlusion analysis, identifying the Cingulum, Rolandic Operculum, Thalamus, and Vermis as important regions in normal aging, and highlighting the Thalamus and Caudate Nucleus as key regions for ASD diagnosis.
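A generic occlusion-analysis sketch, which the abstract uses as a reference explanation method: a cubic patch is zeroed at each position of a 3D volume and the change in predicted brain age is taken as regional importance. The `predict_age` callable and the patch and stride sizes are assumptions, not the paper's implementation.

```python
# Occlusion analysis for a volumetric brain-age model: mask a cube of voxels,
# re-run the model, and record the absolute change in the predicted age.
import numpy as np

def occlusion_map(volume: np.ndarray, predict_age, patch=16, stride=16):
    """volume: 3D MRI array; predict_age: callable mapping a volume to a float."""
    baseline = predict_age(volume)
    heat = np.zeros_like(volume, dtype=np.float32)
    for x in range(0, volume.shape[0] - patch + 1, stride):
        for y in range(0, volume.shape[1] - patch + 1, stride):
            for z in range(0, volume.shape[2] - patch + 1, stride):
                occluded = volume.copy()
                occluded[x:x+patch, y:y+patch, z:z+patch] = 0.0
                heat[x:x+patch, y:y+patch, z:z+patch] = abs(
                    predict_age(occluded) - baseline)
    return heat
```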

ToPoMesh: accurate 3D surface reconstruction from CT volumetric data via topology modification.

Chen J, Zhu Q, Xie B, Li T

PubMed · May 27, 2025
Traditional computed tomography (CT) methods for 3D reconstruction face resolution limitations and require time-consuming post-processing workflows. While deep learning techniques improve segmentation accuracy, traditional voxel-based segmentation and surface reconstruction pipelines tend to introduce artifacts such as disconnected regions, topological inconsistencies, and stepped distortions. To overcome these challenges, we propose ToPoMesh, an end-to-end deep learning framework for direct reconstruction of high-fidelity surface meshes from CT volume data. Our approach introduces three core innovations: (1) accurate local and global shape modeling by preserving and enhancing local feature information through residual connectivity and self-attention mechanisms in graph convolutional networks; (2) an adaptive variant density (Avd) mesh de-pooling strategy that dynamically optimizes the vertex distribution; (3) a topology modification module that iteratively prunes erroneous surfaces and smooths boundaries via variable regularity terms to obtain finer mesh surfaces. Experiments on the LiTS, MSD pancreas tumor, MSD hippocampus, and MSD spleen datasets demonstrate that ToPoMesh outperforms state-of-the-art methods. Quantitative evaluations show a 57.4% reduction in Chamfer distance (liver) and a 0.47% improvement in F-score compared to end-to-end 3D reconstruction methods, while qualitative results confirm enhanced fidelity for thin structures and complex anatomical topologies versus segmentation frameworks. Importantly, our method eliminates the need for manual post-processing, enables direct reconstruction of 3D meshes from images, and can provide precise guidance for surgical planning and diagnosis.
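One common formulation of the Chamfer distance cited in the quantitative evaluation is sketched below: the mean nearest-neighbour distance between predicted and reference vertex sets, summed over both directions. Whether the authors use this exact variant (e.g., squared distances or a different normalization) is not stated, so treat it as an assumption.

```python
# Symmetric Chamfer distance between two vertex sets, shapes (N, 3) and (M, 3).
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(pred_pts: np.ndarray, ref_pts: np.ndarray) -> float:
    d_pred_to_ref, _ = cKDTree(ref_pts).query(pred_pts)   # pred -> nearest ref
    d_ref_to_pred, _ = cKDTree(pred_pts).query(ref_pts)   # ref -> nearest pred
    return float(d_pred_to_ref.mean() + d_ref_to_pred.mean())
```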

Dual-energy CT combined with histogram parameters in the assessment of perineural invasion in colorectal cancer.

Wang Y, Tan H, Li S, Long C, Zhou B, Wang Z, Cao Y

PubMed · May 27, 2025
The purpose of this study was to evaluate the predictive value of dual-energy CT (DECT) combined with histogram parameters and a clinical prediction model for perineural invasion (PNI) in colorectal cancer (CRC). We retrospectively analyzed clinical and imaging data from 173 CRC patients who underwent preoperative DECT-enhanced scanning at two centers. Data from Qinghai University Affiliated Hospital (n = 120) were randomly divided into training and validation sets, while data from Lanzhou University Second Hospital (n = 53) served as the external validation set. Regions of interest (ROIs) were delineated to extract spectral and histogram parameters, and multivariate logistic regression identified optimal predictors. Six machine learning models were constructed: support vector machine (SVM), decision tree (DT), random forest (RF), logistic regression (LR), k-nearest neighbors (KNN), and extreme gradient boosting (XGBoost). Model performance and clinical utility were assessed using receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis (DCA). Four independent predictive factors were identified through multivariate analysis: entropy, CT40keV, CEA, and skewness. Among the six classifier models, the RF model demonstrated the best performance in the training set (AUC = 0.918, 95% CI: 0.862-0.969). In the validation set, RF outperformed the other models (AUC = 0.885, 95% CI: 0.772-0.972). Notably, in the external validation set, the XGBoost model achieved the highest performance (AUC = 0.823, 95% CI: 0.672-0.945). Dual-energy CT combined with histogram parameters and a clinical prediction model can be effectively used for preoperative noninvasive assessment of perineural invasion in colorectal cancer.
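The two histogram parameters identified as independent predictors (entropy and skewness) can be computed from the ROI attenuation values roughly as follows; the bin count and log base are illustrative assumptions rather than the authors' settings.

```python
# First-order histogram features from the voxel intensities inside an ROI.
import numpy as np
from scipy.stats import skew

def histogram_features(roi_values: np.ndarray, n_bins: int = 64):
    """roi_values: 1D array of voxel intensities (e.g. HU) inside the ROI."""
    counts, _ = np.histogram(roi_values, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                                   # avoid log(0)
    entropy = float(-np.sum(p * np.log2(p)))       # Shannon entropy of the histogram
    return {"entropy": entropy, "skewness": float(skew(roi_values))}
```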

Development of an Open-Source Algorithm for Automated Segmentation in Clinician-Led Paranasal Sinus Radiologic Research.

Darbari Kaul R, Zhong W, Liu S, Azemi G, Liang K, Zou E, Sacks PL, Thiel C, Campbell RG, Kalish L, Sacks R, Di Ieva A, Harvey RJ

PubMed · May 27, 2025
Artificial intelligence (AI) research needs to be clinician-led; however, the required expertise typically lies outside clinicians' skill set. Collaborations exist but are often commercially driven. Free and open-source computational algorithms and software expertise are required for meaningful clinically driven AI medical research. Deep learning algorithms automate the segmentation of regions of interest for analysis and clinical translation. Numerous studies have automatically segmented paranasal sinus computed tomography (CT) scans; however, openly accessible algorithms capturing the sinonasal cavity remain scarce. The purpose of this study was to validate and provide an open-source segmentation algorithm for paranasal sinus CTs for the otolaryngology research community. A cross-sectional comparative study was conducted with a deep learning algorithm, UNet++, modified for automatic segmentation of paranasal sinus CTs and compared against "ground-truth" manual segmentations. A dataset of 100 paranasal sinus CT scans was manually segmented, with an 80/20 training/testing split. The algorithm is available at https://github.com/rheadkaul/SinusSegment. Primary outcomes included the Dice similarity coefficient (DSC), Intersection over Union (IoU), Hausdorff distance (HD), sensitivity, specificity, and visual similarity grading. Twenty scans comprising 7300 slices were assessed. The mean DSC was 0.87 and IoU 0.80, with an HD of 33.61 mm. The mean sensitivity was 83.98% and specificity 99.81%. The median visual similarity grading score was 3 (good). There were no statistically significant differences in outcomes between normal and diseased paranasal sinus CTs. Automatic segmentation of paranasal sinus CTs yields good results when compared with manual segmentation. This study provides an open-source segmentation algorithm as a foundation and gateway for more complex AI-based analysis of large datasets.
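A minimal sketch of the overlap metrics reported above (DSC and IoU) for an automated mask versus the manual ground truth; it assumes both masks are boolean or 0/1 NumPy arrays of the same shape.

```python
# Dice similarity coefficient and Intersection over Union for binary masks.
import numpy as np

def dsc_iou(pred_mask: np.ndarray, gt_mask: np.ndarray):
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    dsc = 2.0 * intersection / (pred.sum() + gt.sum())
    iou = intersection / np.logical_or(pred, gt).sum()
    return float(dsc), float(iou)
```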

Automatic assessment of lower limb deformities using high-resolution X-ray images.

Rostamian R, Panahi MS, Karimpour M, Nokiani AA, Khaledi RJ, Kashani HG

PubMed · May 27, 2025
Planning an osteotomy or arthroplasty surgery on a lower limb requires prior classification/identification of its deformities. The detection of skeletal landmarks and the calculation of the angles required to identify the deformities are traditionally done manually, with measurement accuracy relying considerably on the experience of the individual performing the measurements. We propose a novel, image pyramid-based approach to skeletal landmark detection. The proposed approach uses a Convolutional Neural Network (CNN) that receives the raw X-ray image as input and produces the coordinates of the landmarks. The landmark estimates are modified iteratively via an error-feedback method to come closer to the target. Our clinically produced full-leg X-ray dataset is made publicly available and used to train and test the network. Angular quantities are calculated based on the detected landmarks. Angles are then classified as lower than normal, normal, or higher than normal according to predefined ranges for a normal condition. The performance of our approach is evaluated at several levels: landmark coordinate accuracy, angle measurement accuracy, and classification accuracy. The average absolute error (difference between automatically and manually determined coordinates) for landmarks was 0.79 ± 0.57 mm on test data, and the average absolute error (difference between automatically and manually calculated angles) for angles was 0.45 ± 0.42°. Results from multiple case studies involving high-resolution images show that the proposed approach outperforms previous deep learning-based approaches in terms of accuracy and computational cost. It also enables the automatic detection of lower limb misalignments in full-leg X-ray images.
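The angle-measurement and classification step can be sketched as below: an angle is computed at a vertex landmark from two neighbouring landmarks and compared against a predefined normal range. The landmark geometry and the example range are hypothetical placeholders, not the paper's values.

```python
# Angle at a vertex landmark and classification against a normal range.
import numpy as np

def angle_at(vertex: np.ndarray, p1: np.ndarray, p2: np.ndarray) -> float:
    """Angle (degrees) at `vertex` formed by rays towards landmarks p1 and p2."""
    v1, v2 = p1 - vertex, p2 - vertex
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

def classify_angle(angle_deg: float, normal_range=(85.0, 95.0)) -> str:
    # normal_range is an illustrative placeholder, not a clinical reference value.
    low, high = normal_range
    if angle_deg < low:
        return "lower than normal"
    if angle_deg > high:
        return "higher than normal"
    return "normal"
```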

Multicentre evaluation of deep learning CT autosegmentation of the head and neck region for radiotherapy.

Pang EPP, Tan HQ, Wang F, Niemelä J, Bolard G, Ramadan S, Kiljunen T, Capala M, Petit S, Seppälä J, Vuolukka K, Kiitam I, Zolotuhhin D, Gershkevitsh E, Lehtiö K, Nikkinen J, Keyriläinen J, Mokka M, Chua MLK

PubMed · May 27, 2025
This multi-institutional study evaluated head-and-neck CT auto-segmentation software across seven institutions globally. Eleven lymph node levels and seven organ-at-risk contours were evaluated in a two-phase study design. Time savings were measured in both phases, and the inter-observer variability across the seven institutions was quantified in phase two. Overall time savings were 42% in phase one and 49% in phase two. Lymph node levels IA, IB, III, IVA, and IVB showed no significant time savings, with some centers reporting longer editing times than manual delineation. All edited ROIs showed reduced inter-observer variability compared with manual segmentation. Our study shows that auto-segmentation plays a crucial role in harmonizing contouring practices globally. However, the clinical benefits of auto-segmentation software vary significantly across ROIs and between clinics. To maximize its potential, institution-specific commissioning is required to optimize the clinical benefits.
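One plausible way to quantify the inter-observer variability mentioned here is pairwise Dice overlap between the contours produced at each institution for the same ROI; the data layout below is an illustrative assumption, and the study's actual variability metric is not specified in this summary.

```python
# Mean pairwise Dice across institutions' contours for one ROI on one case.
import itertools
import numpy as np

def pairwise_dice(contours_by_institution: dict) -> float:
    """contours_by_institution: {institution_name: binary mask (np.ndarray)}."""
    dices = []
    for a, b in itertools.combinations(contours_by_institution.values(), 2):
        a, b = a.astype(bool), b.astype(bool)
        dices.append(2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum()))
    return float(np.mean(dices))
```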