Page 73 of 107 · 1070 results

Artificial Intelligence in Value-Based Health Care.

Shah R, Bozic KJ, Jayakumar P

pubmed · May 28 2025
Artificial intelligence (AI) presents new opportunities to advance value-based healthcare in orthopedic surgery through 3 potential mechanisms: agency, automation, and augmentation. AI may enhance patient agency through improved health literacy and remote monitoring while reducing costs through triage and reduction in specialist visits. In automation, AI optimizes operating room scheduling and streamlines administrative tasks, with documented cost savings and improved efficiency. For augmentation, AI has been shown to be accurate in diagnostic imaging interpretation and surgical planning, while enabling more precise outcome predictions and personalized treatment approaches. However, implementation faces substantial challenges, including resistance from healthcare professionals, technical barriers to data quality and privacy, and significant financial investments required for infrastructure. Success in healthcare AI integration requires careful attention to regulatory frameworks, data privacy, and clinical validation.

Single Domain Generalization for Alzheimer's Detection from 3D MRIs with Pseudo-Morphological Augmentations and Contrastive Learning

Zobia Batool, Huseyin Ozkan, Erchan Aptoula

arxiv preprint · May 28 2025
Although Alzheimer's disease detection via MRIs has advanced significantly thanks to contemporary deep learning models, challenges such as class imbalance, protocol variations, and limited dataset diversity often hinder their generalization capacity. To address this issue, this article focuses on the single domain generalization setting, in which a model is designed using data from a single domain so as to maximize performance on an unseen domain with a distinct distribution. Since brain morphology is known to play a crucial role in Alzheimer's diagnosis, we propose learnable pseudo-morphological modules that produce shape-aware, anatomically meaningful, class-specific augmentations, combined with a supervised contrastive learning module that extracts robust class-specific representations. Experiments conducted across three datasets show improved performance and generalization capacity, especially under class imbalance and imaging protocol variations. The source code will be made available upon acceptance at https://github.com/zobia111/SDG-Alzheimer.
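The supervised contrastive module mentioned above follows the standard supervised contrastive (SupCon) formulation: each anchor is pulled toward same-class embeddings and pushed from all others. A minimal pure-Python sketch of that loss (an illustration of the formula, not the authors' implementation):

```python
import math

def supcon_loss(embeddings, labels, tau=0.1):
    """Supervised contrastive loss (Khosla et al., 2020): for each anchor,
    average -log softmax similarity over its same-class positives."""
    def normalize(v):
        s = math.sqrt(sum(x * x for x in v))
        return [x / s for x in v]

    z = [normalize(v) for v in embeddings]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    n, total, anchors = len(z), 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # anchors without positives contribute nothing
        denom = sum(math.exp(dot(z[i], z[j]) / tau) for j in range(n) if j != i)
        loss_i = -sum(math.log(math.exp(dot(z[i], z[p]) / tau) / denom)
                      for p in positives) / len(positives)
        total += loss_i
        anchors += 1
    return total / anchors if anchors else 0.0
```

Well-separated, correctly labeled embeddings yield a lower loss than the same embeddings with mismatched labels, which is exactly the pressure that produces robust class-specific representations.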

Multi-class classification of central and non-central geographic atrophy using Optical Coherence Tomography

Siraz, S., Kamanda, H., Gholami, S., Nabil, A. S., Ong, S. S. Y., Alam, M. N.

medrxiv preprint · May 28 2025
Purpose: To develop and validate deep learning (DL)-based models for classifying geographic atrophy (GA) subtypes using Optical Coherence Tomography (OCT) scans across four clinical classification tasks.

Design: Retrospective comparative study evaluating three DL architectures on OCT data with two experimental approaches.

Subjects: 455 OCT volumes (258 Central GA [CGA], 74 Non-Central GA [NCGA], 123 no GA [NGA]) from 104 patients at Atrium Health Wake Forest Baptist. For GA versus age-related macular degeneration (AMD) classification, we supplemented our dataset with AMD cases from four public repositories.

Methods: We implemented ResNet50, MobileNetV2, and Vision Transformer (ViT-B/16) architectures using two approaches: (1) utilizing all B-scans within each OCT volume and (2) selectively using B-scans containing foveal regions. Models were trained using transfer learning, standardized data augmentation, and patient-level data splitting (70:15:15 ratio) for training, validation, and testing.

Main Outcome Measures: Area under the receiver operating characteristic curve (AUC-ROC), F1 score, and accuracy for each classification task (CGA vs. NCGA, CGA vs. NCGA vs. NGA, GA vs. NGA, and GA vs. other forms of AMD).

Results: ViT-B/16 consistently outperformed the other architectures across all classification tasks. For CGA versus NCGA classification, ViT-B/16 achieved an AUC-ROC of 0.728 ± 0.083 and accuracy of 0.831 ± 0.006 using selective B-scans. In GA versus NGA classification, ViT-B/16 attained an AUC-ROC of 0.950 ± 0.002 and accuracy of 0.873 ± 0.012 with selective B-scans. All models demonstrated exceptional performance in distinguishing GA from other AMD forms (AUC-ROC > 0.998). For multi-class classification, ViT-B/16 achieved an AUC-ROC of 0.873 ± 0.003 and accuracy of 0.751 ± 0.002 using selective B-scans.

Conclusions: Our DL approach successfully classifies GA subtypes with clinically relevant accuracy. ViT-B/16 demonstrates superior performance due to its ability to capture spatial relationships between atrophic regions and the foveal center. Focusing on B-scans containing foveal regions improved diagnostic accuracy while reducing computational requirements, better aligning with clinical practice workflows.
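The patient-level 70:15:15 split used here is a detail worth getting right: splitting at the volume level would let the same patient's scans leak across training and test sets. A hypothetical helper (names and data structure assumed for illustration, not taken from the paper) sketches the idea:

```python
import random

def patient_level_split(volumes, train=0.70, val=0.15, seed=42):
    """Split OCT volumes by patient so no patient appears in two sets."""
    patients = sorted({v["patient_id"] for v in volumes})
    rng = random.Random(seed)           # fixed seed for reproducibility
    rng.shuffle(patients)
    n = len(patients)
    n_train = round(n * train)
    n_val = round(n * val)
    groups = {
        "train": set(patients[:n_train]),
        "val": set(patients[n_train:n_train + n_val]),
        "test": set(patients[n_train + n_val:]),
    }
    return {name: [v for v in volumes if v["patient_id"] in ids]
            for name, ids in groups.items()}

# Toy example: 10 patients with 2 volumes each.
vols = [{"patient_id": p, "scan": s} for p in range(10) for s in range(2)]
splits = patient_level_split(vols)
```

Because the ratios are applied to patients rather than volumes, set sizes in volumes may deviate slightly from 70:15:15 when patients contribute unequal numbers of scans.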

Quantitative computed tomography imaging classification of cement dust-exposed patients-based Kolmogorov-Arnold networks.

Chau NK, Kim WJ, Lee CH, Chae KJ, Jin GY, Choi S

pubmed · May 27 2025
Occupational health assessment is critical for detecting respiratory issues caused by harmful exposures, such as cement dust. Quantitative computed tomography (QCT) imaging provides detailed insights into lung structure and function, enhancing the diagnosis of lung diseases. However, its high dimensionality poses challenges for traditional machine learning methods. In this study, Kolmogorov-Arnold networks (KANs) were used for the binary classification of QCT imaging data to assess respiratory conditions associated with cement dust exposure. The dataset comprised QCT images from 609 individuals, including 311 subjects exposed to cement dust and 298 healthy controls. We derived 141 QCT-based variables and employed KANs with two hidden layers of 15 and 8 neurons. The network parameters, including grid intervals, polynomial order, learning rate, and penalty strengths, were carefully fine-tuned. The performance of the model was assessed through various metrics, including accuracy, precision, recall, F1 score, specificity, and the Matthews Correlation Coefficient (MCC). A five-fold cross-validation was employed to enhance the robustness of the evaluation. SHAP analysis was applied to interpret the sensitive QCT features. The KAN model demonstrated consistently high performance across all metrics, with an average accuracy of 98.03 %, precision of 97.35 %, recall of 98.70 %, F1 score of 98.01 %, and specificity of 97.40 %. The MCC value further confirmed the robustness of the model in managing imbalanced datasets. The comparative analysis demonstrated that the KAN model outperformed traditional methods and other deep learning approaches, such as TabPFN, ANN, FT-Transformer, VGG19, MobileNets, ResNet101, XGBoost, SVM, random forest, and decision tree. SHAP analysis highlighted structural and functional lung features, such as airway geometry, wall thickness, and lung volume, as key predictors. 
KANs significantly improved the classification of QCT imaging data, enhancing early detection of cement dust-induced respiratory conditions. SHAP analysis supported model interpretability, enhancing its potential for clinical translation in occupational health assessments.
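All of the metrics reported above (accuracy, precision, recall, F1, specificity, MCC) derive from the binary confusion matrix; a minimal sketch of the standard formulas, independent of the KAN model itself:

```python
import math

def binary_metrics(y_true, y_pred):
    """Standard binary-classification metrics from labels (1 = exposed)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"accuracy": (tp + tn) / len(y_true), "precision": precision,
            "recall": recall, "f1": f1, "specificity": specificity, "mcc": mcc}
```

MCC is the one metric here that stays informative under class imbalance, since it weighs all four confusion-matrix cells, which is presumably why the authors report it alongside the others.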

Development and validation of a CT-based radiomics machine learning model for differentiating immune-related interstitial pneumonia.

Luo T, Guo J, Xi J, Luo X, Fu Z, Chen W, Huang D, Chen K, Xiao Q, Wei S, Wang Y, Du H, Liu L, Cai S, Dong H

pubmed · May 27 2025
Immune checkpoint inhibitor-related interstitial pneumonia (CIP) poses a diagnostic challenge due to its radiographic similarity to other pneumonias. We developed a non-invasive model using CT imaging to differentiate CIP from other pneumonias (OTP). We analyzed CIP and OTP patients treated with immunotherapy at five medical centers between 2020 and 2023, randomly dividing them into training and validation cohorts at a 7:3 ratio. A radiomics model was developed using random forest analysis, and a new model was then built by combining independent risk factors for CIP. The models were evaluated using ROC, calibration, and decision curve analysis. A total of 238 patients with pneumonia following immunotherapy were included: 116 with CIP and 122 with OTP. After random allocation, the training cohort included 166 patients and the validation cohort 72. A radiomics model composed of 11 radiomic features was established using the random forest method, with an AUC of 0.833 in the training cohort and 0.821 in the validation cohort. Univariate and multivariate logistic regression analyses revealed significant differences in smoking history, radiotherapy history, and radiomics score between CIP and OTP (p < 0.05). A new model was constructed based on these three factors and a nomogram was drawn. This model showed good calibration and net benefit in both the training and validation cohorts, with AUCs of 0.872 and 0.860, respectively. Using the random forest machine learning method, we successfully constructed a CT-based radiomics CIP differential diagnostic model that can accurately, non-invasively, and rapidly provide clinicians with etiological support for pneumonia diagnosis.
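The combined model takes the nomogram-style form of a logistic regression over the three independent predictors. A sketch with placeholder coefficients (the abstract does not report the fitted values, so `b0`, `b_smoke`, `b_rt`, and `b_rad` below are purely illustrative):

```python
import math

def cip_probability(smoking, radiotherapy, rad_score,
                    b0=-2.0, b_smoke=0.8, b_rt=1.1, b_rad=3.5):
    """Logistic combination of smoking history (0/1), radiotherapy
    history (0/1), and radiomics score; coefficients are placeholders."""
    z = b0 + b_smoke * smoking + b_rt * radiotherapy + b_rad * rad_score
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid -> probability of CIP
```

A nomogram is simply a graphical reading of this weighted sum: each predictor contributes points proportional to its coefficient, and the total maps to a predicted probability.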

Modeling Brain Aging with Explainable Triamese ViT: Towards Deeper Insights into Autism Disorder.

Zhang Z, Aggarwal V, Angelov P, Jiang R

pubmed · May 27 2025
Machine learning, particularly through advanced imaging techniques such as three-dimensional Magnetic Resonance Imaging (MRI), has significantly improved medical diagnostics. This is especially critical for diagnosing complex conditions like Alzheimer's disease. Our study introduces Triamese-ViT, an innovative tri-structure of Vision Transformers (ViTs) with built-in, structure-aware explainability: it identifies and visualizes the key features or regions contributing to each prediction while integrating information from three perspectives to enhance brain age estimation. This method not only increases accuracy but also improves interpretability relative to existing techniques. When evaluated, Triamese-ViT demonstrated superior performance and produced insightful attention maps. We applied these attention maps to the analysis of natural aging and the diagnosis of Autism Spectrum Disorder (ASD). The results aligned with those from occlusion analysis, identifying the Cingulum, Rolandic Operculum, Thalamus, and Vermis as important regions in normal aging, and highlighting the Thalamus and Caudate Nucleus as key regions for ASD diagnosis.

Dual-energy CT combined with histogram parameters in the assessment of perineural invasion in colorectal cancer.

Wang Y, Tan H, Li S, Long C, Zhou B, Wang Z, Cao Y

pubmed · May 27 2025
This study evaluates the predictive value of dual-energy CT (DECT) combined with histogram parameters and a clinical prediction model for perineural invasion (PNI) in colorectal cancer (CRC). We retrospectively analyzed clinical and imaging data from 173 CRC patients who underwent preoperative DECT-enhanced scanning at two centers. Data from Qinghai University Affiliated Hospital (n = 120) were randomly divided into training and validation sets, while data from Lanzhou University Second Hospital (n = 53) served as the external validation set. Regions of interest (ROIs) were delineated to extract spectral and histogram parameters, and multivariate logistic regression identified optimal predictors. Six machine learning models-support vector machine (SVM), decision tree (DT), random forest (RF), logistic regression (LR), k-nearest neighbors (KNN), and extreme gradient boosting (XGBoost)-were constructed. Model performance and clinical utility were assessed using receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis (DCA). Four independent predictive factors were identified through multivariate analysis: entropy, CT40<sub>KeV</sub>, CEA, and skewness. Among the six classifiers, the RF model demonstrated the best performance in the training set (AUC = 0.918, 95% CI: 0.862-0.969) and outperformed the other models in the validation set (AUC = 0.885, 95% CI: 0.772-0.972). Notably, in the external validation set, the XGBoost model achieved the highest performance (AUC = 0.823, 95% CI: 0.672-0.945). Dual-energy CT combined with histogram parameters and clinical prediction modeling can be used effectively for preoperative, noninvasive assessment of perineural invasion in colorectal cancer.
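Two of the four predictors identified above, entropy and skewness, are first-order histogram parameters computed from the ROI intensity distribution. Minimal sketches of the standard formulas (hypothetical helpers for illustration, not the study's radiomics pipeline):

```python
import math

def histogram_entropy(values, bins=16):
    """Shannon entropy (bits) of the ROI intensity histogram."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0   # guard against constant ROIs
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def skewness(values):
    """Fisher-Pearson skewness: asymmetry of the intensity distribution."""
    n = len(values)
    mean = sum(values) / n
    m2 = sum((v - mean) ** 2 for v in values) / n
    m3 = sum((v - mean) ** 3 for v in values) / n
    return m3 / m2 ** 1.5 if m2 else 0.0
```

Higher entropy reflects a more heterogeneous intensity texture; nonzero skewness indicates an asymmetric intensity distribution, both plausible correlates of tumor infiltration.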

Prostate Cancer Screening with Artificial Intelligence-Enhanced Micro-Ultrasound: A Comparative Study with Traditional Methods

Muhammad Imran, Wayne G. Brisbane, Li-Ming Su, Jason P. Joseph, Wei Shao

arxiv preprint · May 27 2025
Background and objective: Micro-ultrasound (micro-US) is a novel imaging modality with diagnostic accuracy comparable to MRI for detecting clinically significant prostate cancer (csPCa). We investigated whether artificial intelligence (AI) interpretation of micro-US can outperform clinical screening methods using PSA and digital rectal examination (DRE). Methods: We retrospectively studied 145 men who underwent micro-US guided biopsy (79 with csPCa, 66 without). A self-supervised convolutional autoencoder was used to extract deep image features from 2D micro-US slices. Random forest classifiers were trained using five-fold cross-validation to predict csPCa at the slice level. Patients were classified as csPCa-positive if 88 or more consecutive slices were predicted positive. Model performance was compared with a classifier using PSA, DRE, prostate volume, and age. Key findings and limitations: The AI-based micro-US model and clinical screening model achieved AUROCs of 0.871 and 0.753, respectively. At a fixed threshold, the micro-US model achieved 92.5% sensitivity and 68.1% specificity, while the clinical model showed 96.2% sensitivity but only 27.3% specificity. Limitations include a retrospective single-center design and lack of external validation. Conclusions and clinical implications: AI-interpreted micro-US improves specificity while maintaining high sensitivity for csPCa detection. This method may reduce unnecessary biopsies and serve as a low-cost alternative to PSA-based screening. Patient summary: We developed an AI system to analyze prostate micro-ultrasound images. It outperformed PSA and DRE in detecting aggressive cancer and may help avoid unnecessary biopsies.
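The patient-level decision rule described above (positive if 88 or more consecutive slices are predicted positive) reduces to finding the longest run of positive slice predictions. A hypothetical helper (the threshold value comes from the abstract; the function itself is an illustration, not the authors' code):

```python
def patient_positive(slice_preds, min_consecutive=88):
    """Classify a patient as csPCa-positive if the predictions contain a
    run of at least `min_consecutive` consecutive positive slices."""
    run = best = 0
    for p in slice_preds:
        run = run + 1 if p else 0   # extend or reset the current run
        best = max(best, run)
    return best >= min_consecutive
```

Requiring a long consecutive run rather than a raw count of positive slices suppresses isolated false-positive slices, trading sensitivity for the higher specificity the study reports.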

Decoding Breast Cancer in X-ray Mammograms: A Multi-Parameter Approach Using Fractals, Multifractals, and Structural Disorder Analysis

Santanu Maity, Mousa Alrubayan, Prabhakar Pradhan

arxiv preprint · May 27 2025
We explored the fractal and multifractal characteristics of breast mammogram micrographs to identify quantitative biomarkers associated with breast cancer progression. In addition to conventional fractal and multifractal analyses, we employed a recently developed fractal-functional distribution method, which transforms fractal measures into Gaussian distributions for more robust statistical interpretation. Given the sparsity of mammogram intensity data, we also analyzed how variations in intensity thresholds, used for binary transformations of the fractal dimension, follow unique trajectories that may serve as novel indicators of disease progression. Our findings demonstrate that fractal, multifractal, and fractal-functional parameters effectively differentiate between benign and cancerous tissue. Furthermore, the threshold-dependent behavior of intensity-based fractal measures presents distinct patterns in cancer cases. To complement these analyses, we applied the Inverse Participation Ratio (IPR) light localization technique to quantify structural disorder at the microscopic level. This multi-parametric approach, integrating spatial complexity and structural disorder metrics, offers a promising framework for enhancing the sensitivity and specificity of breast cancer detection.
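The fractal dimension underlying these analyses is conventionally estimated by box counting on a binarized image: count the occupied boxes N(s) at several box sizes s and fit the slope of log N(s) against log(1/s). A minimal sketch under that standard formulation (not the authors' specific pipeline, which additionally varies the binarization threshold):

```python
import math

def box_counting_dimension(grid, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a square binary image as the
    least-squares slope of log N(s) versus log(1/s)."""
    n = len(grid)
    xs, ys = [], []
    for s in sizes:
        count = 0
        for i in range(0, n, s):
            for j in range(0, n, s):
                # A box is occupied if any pixel inside it is set.
                if any(grid[a][b]
                       for a in range(i, min(i + s, n))
                       for b in range(j, min(j + s, n))):
                    count += 1
        xs.append(math.log(1.0 / s))
        ys.append(math.log(count))
    k = len(xs)
    mx, my = sum(xs) / k, sum(ys) / k
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

Repeating this estimate while sweeping the binarization threshold yields the threshold-dependent trajectories the authors propose as progression indicators.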

Machine learning decision support model construction for craniotomy approach of pineal region tumors based on MRI images.

Chen Z, Chen Y, Su Y, Jiang N, Wanggou S, Li X

pubmed · May 27 2025
Pineal region tumors (PRTs) are rare but deep-seated brain tumors, and complete surgical resection is crucial for effective treatment. The choice of surgical approach is often challenging due to the tumors' low incidence and deep location. This study combines machine learning and deep learning algorithms with pre-operative MRI images to build a model that recommends surgical approaches for PRTs, aiming to encode clinical experience for practical reference and education. This retrospective study enrolled 173 patients radiologically diagnosed with PRTs at our hospital. Three traditional surgical approaches were recorded as prediction labels. Clinical and VASARI-related radiological features were selected to construct the machine learning prediction models, and MRI images in axial, sagittal, and coronal orientations were used to build and evaluate the deep learning craniotomy-approach prediction models. Five machine learning methods were applied to construct predictive classifiers from the clinical and VASARI features, and all achieved area under the ROC (receiver operating characteristic) curve (AUC) values above 0.7. Three deep learning algorithms (ResNet-50, EfficientNetV2-m, and ViT) were applied to the MRI images from the different orientations; EfficientNetV2-m achieved the highest AUC of 0.89, demonstrating strong predictive performance. Class activation mapping revealed that the tumor itself and its surrounding anatomy are the crucial areas for model decision-making. In summary, we used machine learning and deep learning to construct surgical-approach recommendation models; the deep learning models achieved high predictive performance and can provide efficient, personalized decision support for the surgical approach to PRTs.
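The AUC values reported for these classifiers are, equivalently, the normalized Mann-Whitney U statistic: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal sketch of that computation (a hypothetical helper, not the study's evaluation code):

```python
def auc_score(y_true, scores):
    """AUC via the Mann-Whitney interpretation: fraction of
    positive/negative pairs ranked correctly, ties counting half."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

An AUC above 0.7, the floor reported for all five machine learning classifiers here, means the model ranks a positive case above a negative one more than 70% of the time.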
