
Incorporating organ deformation in biological modeling and patient outcome study for permanent prostate brachytherapy.

To S, Mavroidis P, Chen RC, Wang A, Royce T, Tan X, Zhu T, Lian J

PubMed · May 28, 2025
Permanent prostate brachytherapy has inherent intraoperative organ deformation due to the inflatable trans-rectal ultrasound probe cover. Since the majority of the dose is delivered postoperatively with no deformation, the dosimetry approved at the time of implant may not accurately represent the dose delivered to the target and organs at risk. We aimed to evaluate the biological effect of prostate deformation and its correlation with patient-reported outcomes. We prospectively acquired ultrasound images of the prostate before and after probe-cover inflation for 27 patients undergoing I-125 seed implant. The coordinates of the implanted seeds from the approved clinical plan were transferred to the deformation-corrected prostate using a machine learning-based deformable image registration to simulate the actual dosimetry. The DVHs of both sets of plans were reduced to biologically effective dose (BED) distributions and subsequently to Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP) metrics. Changes in fourteen patient-reported rectal and urinary symptoms between the pretreatment and 6-month post-op time points were correlated with the TCP and NTCP metrics using the area under the curve (AUC) and odds ratio (OR). Between the clinical and the deformation-corrected research plans, the mean TCP decreased by 9.4% (p < 0.01), while the mean NTCP of the rectum decreased by 10.3% and that of the urethra increased by 16.3% (p < 0.01). For the diarrhea symptom, the deformation-corrected research plans showed AUC = 0.75 and OR = 8.9 (1.3-58.8) for the threshold NTCP > 20%, while the clinical plans showed AUC = 0.56 and OR = 1.4 (0.2-9.0). For the symptom of urinary control, the deformation-corrected research plans showed AUC = 0.70 and OR = 6.9 (0.6-78.0) for the threshold NTCP > 15%, while the clinical plans showed AUC = 0.51 and no positive OR. Taking organ deformation into consideration, clinical brachytherapy plans showed worse tumor coverage and worse urethra sparing but better rectal sparing. The deformation-corrected research plans correlated more strongly with patient-reported outcomes than the clinical plans for the symptoms of diarrhea and urinary control.
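A minimal sketch (not the authors' code) of how an NTCP metric can be correlated with a binary patient-reported outcome via AUC and an odds ratio at a fixed threshold, as the abstract describes; the NTCP values and symptom flags below are hypothetical placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
ntcp_rectum = rng.uniform(0.05, 0.40, size=27)                       # per-patient rectal NTCP
symptom_worsened = (ntcp_rectum + rng.normal(0, 0.08, 27)) > 0.22    # e.g. diarrhea worsened at 6 months

# Discrimination: AUC of the NTCP metric for predicting symptom worsening.
auc = roc_auc_score(symptom_worsened, ntcp_rectum)

# Association: odds ratio for the NTCP > 20% threshold quoted in the abstract.
high = ntcp_rectum > 0.20
a = np.sum(high & symptom_worsened) + 0.5    # Haldane correction to avoid
b = np.sum(high & ~symptom_worsened) + 0.5   # division by zero in a small cohort
c = np.sum(~high & symptom_worsened) + 0.5
d = np.sum(~high & ~symptom_worsened) + 0.5
odds_ratio = (a * d) / (b * c)

print(f"AUC = {auc:.2f}, OR (NTCP > 20%) = {odds_ratio:.1f}")
```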

An AI system for continuous knee osteoarthritis severity grading: An anomaly detection inspired approach with few labels.

Belton N, Lawlor A, Curran KM

PubMed · May 28, 2025
The diagnostic accuracy and subjectivity of existing Knee Osteoarthritis (OA) ordinal grading systems have been a subject of ongoing debate and concern. Existing automated solutions are trained to emulate these imperfect systems, whilst also relying on large annotated databases for fully supervised training. This work proposes a three-stage approach for automated continuous grading of knee OA built upon the principles of Anomaly Detection (AD): learning a robust representation of healthy knee X-rays and grading disease severity by its distance to the centre of normality. In the first stage, we propose SS-FewSOME, a self-supervised AD technique that learns the 'normal' representation, requiring only examples of healthy subjects and <3% of the labels that existing methods require. In the second stage, this model is used to pseudo-label a subset of unlabelled data as 'normal' or 'anomalous', followed by denoising of the pseudo labels with CLIP. The final stage retrains on labelled and pseudo-labelled data using the proposed Dual Centre Representation Learning (DCRL), which learns the centres of two representation spaces, normal and anomalous; disease severity is then graded based on the distance to the learned centres. The proposed methodology outperforms existing techniques by margins of up to 24% in terms of OA detection, and its disease severity scores correlate with the Kellgren-Lawrence grading system at the level of human expert performance. Code available at https://github.com/niamhbelton/SS-FewSOME_Disease_Severity_Knee_Osteoarthritis.
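A minimal sketch of the distance-based grading idea described above: severity is scored by how far an image embedding lies from the centre of the 'normal' representation, normalised against an 'anomalous' centre as in the dual-centre stage. The embeddings here are random placeholders, not outputs of the actual SS-FewSOME model.

```python
import numpy as np

rng = np.random.default_rng(1)
normal_embeddings = rng.normal(0.0, 1.0, size=(50, 128))     # healthy training representations
anomalous_embeddings = rng.normal(3.0, 1.0, size=(20, 128))  # pseudo-labelled anomalous representations

c_normal = normal_embeddings.mean(axis=0)
c_anomalous = anomalous_embeddings.mean(axis=0)

def severity_score(z: np.ndarray) -> float:
    """Continuous severity in [0, 1]: 0 = at the normal centre, 1 = at the anomalous centre."""
    d_n = np.linalg.norm(z - c_normal)
    d_a = np.linalg.norm(z - c_anomalous)
    return d_n / (d_n + d_a)

test_embedding = rng.normal(1.5, 1.0, size=128)  # a hypothetical test X-ray embedding
print(f"severity = {severity_score(test_embedding):.2f}")
```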

Artificial Intelligence in Value-Based Health Care.

Shah R, Bozic KJ, Jayakumar P

PubMed · May 28, 2025
Artificial intelligence (AI) presents new opportunities to advance value-based healthcare in orthopedic surgery through 3 potential mechanisms: agency, automation, and augmentation. AI may enhance patient agency through improved health literacy and remote monitoring while reducing costs through triage and reduction in specialist visits. In automation, AI optimizes operating room scheduling and streamlines administrative tasks, with documented cost savings and improved efficiency. For augmentation, AI has been shown to be accurate in diagnostic imaging interpretation and surgical planning, while enabling more precise outcome predictions and personalized treatment approaches. However, implementation faces substantial challenges, including resistance from healthcare professionals, technical barriers to data quality and privacy, and significant financial investments required for infrastructure. Success in healthcare AI integration requires careful attention to regulatory frameworks, data privacy, and clinical validation.

Single Domain Generalization for Alzheimer's Detection from 3D MRIs with Pseudo-Morphological Augmentations and Contrastive Learning

Zobia Batool, Huseyin Ozkan, Erchan Aptoula

arXiv preprint · May 28, 2025
Although Alzheimer's disease detection via MRIs has advanced significantly thanks to contemporary deep learning models, challenges such as class imbalance, protocol variations, and limited dataset diversity often hinder their generalization capacity. To address this issue, this article focuses on the single domain generalization setting, where, given data from one domain, a model is designed to achieve maximal performance on an unseen domain with a distinct distribution. Since brain morphology is known to play a crucial role in Alzheimer's diagnosis, we propose learnable pseudo-morphological modules, aimed at producing shape-aware, anatomically meaningful, class-specific augmentations, in combination with a supervised contrastive learning module that extracts robust class-specific representations. Experiments conducted across three datasets show improved performance and generalization capacity, especially under class imbalance and imaging protocol variations. The source code will be made available upon acceptance at https://github.com/zobia111/SDG-Alzheimer.
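A minimal sketch, assuming a standard supervised contrastive objective (in the spirit of Khosla et al., 2020), of the kind of loss the contrastive module described above could use to pull same-class representations together; the features and labels are random placeholders, not the proposed model's outputs.

```python
import numpy as np

def supervised_contrastive_loss(features: np.ndarray, labels: np.ndarray, tau: float = 0.1) -> float:
    """features: (N, D) L2-normalised embeddings; labels: (N,) class ids."""
    n = features.shape[0]
    sim = features @ features.T / tau                       # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                          # exclude self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    same = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    # Average log-probability over positives for each anchor that has positives.
    per_anchor = np.array([log_prob[i, same[i]].mean() for i in range(n) if same[i].any()])
    return float(-per_anchor.mean())

rng = np.random.default_rng(2)
feats = rng.normal(size=(16, 64))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
labels = rng.integers(0, 2, size=16)                        # e.g. AD vs. control
print(f"SupCon loss = {supervised_contrastive_loss(feats, labels):.3f}")
```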

Multi-class classification of central and non-central geographic atrophy using Optical Coherence Tomography

Siraz, S., Kamanda, H., Gholami, S., Nabil, A. S., Ong, S. S. Y., Alam, M. N.

medRxiv preprint · May 28, 2025
Purpose: To develop and validate deep learning (DL)-based models for classifying geographic atrophy (GA) subtypes using Optical Coherence Tomography (OCT) scans across four clinical classification tasks. Design: Retrospective comparative study evaluating three DL architectures on OCT data with two experimental approaches. Subjects: 455 OCT volumes (258 Central GA [CGA], 74 Non-Central GA [NCGA], 123 no GA [NGA]) from 104 patients at Atrium Health Wake Forest Baptist. For GA versus age-related macular degeneration (AMD) classification, we supplemented our dataset with AMD cases from four public repositories. Methods: We implemented ResNet50, MobileNetV2, and Vision Transformer (ViT-B/16) architectures using two approaches: (1) utilizing all B-scans within each OCT volume and (2) selectively using B-scans containing foveal regions. Models were trained using transfer learning, standardized data augmentation, and patient-level data splitting (70:15:15 ratio) for training, validation, and testing. Main Outcome Measures: Area under the receiver operating characteristic curve (AUC-ROC), F1 score, and accuracy for each classification task (CGA vs. NCGA, CGA vs. NCGA vs. NGA, GA vs. NGA, and GA vs. other forms of AMD). Results: ViT-B/16 consistently outperformed other architectures across all classification tasks. For CGA versus NCGA classification, ViT-B/16 achieved an AUC-ROC of 0.728±0.083 and accuracy of 0.831±0.006 using selective B-scans. In GA versus NGA classification, ViT-B/16 attained an AUC-ROC of 0.950±0.002 and accuracy of 0.873±0.012 with selective B-scans. All models demonstrated exceptional performance in distinguishing GA from other AMD forms (AUC-ROC > 0.998). For multi-class classification, ViT-B/16 achieved an AUC-ROC of 0.873±0.003 and accuracy of 0.751±0.002 using selective B-scans. Conclusions: Our DL approach successfully classifies GA subtypes with clinically relevant accuracy. ViT-B/16 demonstrates superior performance due to its ability to capture spatial relationships between atrophic regions and the foveal center. Focusing on B-scans containing foveal regions improved diagnostic accuracy while reducing computational requirements, better aligning with clinical practice workflows.
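A minimal sketch of the patient-level 70:15:15 splitting the Methods describe, i.e. keeping all volumes from one patient inside a single partition to avoid leakage; the IDs and labels are hypothetical placeholders.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(3)
n_volumes = 455
patient_ids = rng.integers(0, 104, size=n_volumes)   # 104 patients
labels = rng.integers(0, 3, size=n_volumes)          # CGA / NCGA / NGA
X = np.arange(n_volumes)                             # volume indices

# Peel off ~70% of patients for training.
gss1 = GroupShuffleSplit(n_splits=1, train_size=0.70, random_state=0)
train_idx, rest_idx = next(gss1.split(X, labels, groups=patient_ids))

# Split the remaining patients 50/50 into validation and test (~15% each overall).
gss2 = GroupShuffleSplit(n_splits=1, train_size=0.50, random_state=0)
val_rel, test_rel = next(gss2.split(X[rest_idx], labels[rest_idx], groups=patient_ids[rest_idx]))
val_idx, test_idx = rest_idx[val_rel], rest_idx[test_rel]

assert not set(patient_ids[train_idx]) & set(patient_ids[test_idx])  # no patient overlap
print(len(train_idx), len(val_idx), len(test_idx))
```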

Quantitative computed tomography imaging classification of cement dust-exposed patients-based Kolmogorov-Arnold networks.

Chau NK, Kim WJ, Lee CH, Chae KJ, Jin GY, Choi S

PubMed · May 27, 2025
Occupational health assessment is critical for detecting respiratory issues caused by harmful exposures, such as cement dust. Quantitative computed tomography (QCT) imaging provides detailed insights into lung structure and function, enhancing the diagnosis of lung diseases. However, its high dimensionality poses challenges for traditional machine learning methods. In this study, Kolmogorov-Arnold networks (KANs) were used for the binary classification of QCT imaging data to assess respiratory conditions associated with cement dust exposure. The dataset comprised QCT images from 609 individuals, including 311 subjects exposed to cement dust and 298 healthy controls. We derived 141 QCT-based variables and employed KANs with two hidden layers of 15 and 8 neurons. The network parameters, including grid intervals, polynomial order, learning rate, and penalty strengths, were carefully fine-tuned. The performance of the model was assessed through various metrics, including accuracy, precision, recall, F1 score, specificity, and the Matthews correlation coefficient (MCC). Five-fold cross-validation was employed to enhance the robustness of the evaluation. SHAP analysis was applied to identify the QCT features to which the model was most sensitive. The KAN model demonstrated consistently high performance across all metrics, with an average accuracy of 98.03%, precision of 97.35%, recall of 98.70%, F1 score of 98.01%, and specificity of 97.40%. The MCC value further confirmed the robustness of the model in managing imbalanced datasets. A comparative analysis demonstrated that the KAN model outperformed traditional methods and other deep learning approaches, such as TabPFN, ANN, FT-Transformer, VGG19, MobileNets, ResNet101, XGBoost, SVM, random forest, and decision tree. SHAP analysis highlighted structural and functional lung features, such as airway geometry, wall thickness, and lung volume, as key predictors. KANs significantly improved the classification of QCT imaging data, enhancing early detection of cement dust-induced respiratory conditions. SHAP analysis supported model interpretability, enhancing its potential for clinical translation in occupational health assessments.
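A minimal sketch of the five-fold cross-validated metric suite reported above (accuracy, precision, recall, F1, specificity, MCC). A scikit-learn MLP with 15- and 8-unit hidden layers stands in for the KAN, since the authors' network and the 141 QCT features are not available here; the data are random placeholders.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef, confusion_matrix)

rng = np.random.default_rng(4)
X = rng.normal(size=(609, 141))                        # 141 QCT-derived variables
y = np.r_[np.ones(311, int), np.zeros(298, int)]       # exposed vs. healthy controls

scores = {m: [] for m in ("acc", "prec", "rec", "f1", "spec", "mcc")}
for train, test in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = MLPClassifier(hidden_layer_sizes=(15, 8), max_iter=500, random_state=0)
    y_pred = clf.fit(X[train], y[train]).predict(X[test])
    tn, fp, fn, tp = confusion_matrix(y[test], y_pred).ravel()
    scores["acc"].append(accuracy_score(y[test], y_pred))
    scores["prec"].append(precision_score(y[test], y_pred, zero_division=0))
    scores["rec"].append(recall_score(y[test], y_pred))
    scores["f1"].append(f1_score(y[test], y_pred))
    scores["spec"].append(tn / (tn + fp))               # specificity from the confusion matrix
    scores["mcc"].append(matthews_corrcoef(y[test], y_pred))

print({m: round(float(np.mean(v)), 3) for m, v in scores.items()})
```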

Development and validation of a CT-based radiomics machine learning model for differentiating immune-related interstitial pneumonia.

Luo T, Guo J, Xi J, Luo X, Fu Z, Chen W, Huang D, Chen K, Xiao Q, Wei S, Wang Y, Du H, Liu L, Cai S, Dong H

PubMed · May 27, 2025
Immune checkpoint inhibitor-related interstitial pneumonia (CIP) poses a diagnostic challenge due to its radiographic similarity to other pneumonias. We developed a non-invasive model using CT imaging to differentiate CIP from other pneumonias (OTP). We analyzed patients who developed CIP or OTP after immunotherapy at five medical centers between 2020 and 2023 and randomly divided them into training and validation cohorts in a 7:3 ratio. A radiomics model was developed using random forest analysis. A new model was then built by combining independent risk factors for CIP. The models were evaluated using ROC, calibration, and decision curve analysis. A total of 238 patients with pneumonia following immunotherapy were included, comprising 116 CIP and 122 OTP cases. After random allocation, the training cohort included 166 patients and the validation cohort included 72 patients. A radiomics model composed of 11 radiomic features was established using the random forest method, with an AUC of 0.833 for the training cohort and 0.821 for the validation cohort. Univariate and multivariate logistic regression analyses revealed significant differences in smoking history, radiotherapy history, and radiomics score between CIP and OTP (p < 0.05). A new model was constructed based on these three factors and a nomogram was drawn. This model showed good calibration and net benefit in both the training and validation cohorts, with AUCs of 0.872 and 0.860, respectively. Using the random forest machine learning method, we successfully constructed a CT-based radiomics differential diagnostic model for CIP that can accurately, non-invasively, and rapidly provide clinicians with etiological support for pneumonia diagnosis.
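A minimal sketch of the two-stage idea described above: a random forest on radiomic features yields a radiomics score, which is then combined with clinical factors (smoking and radiotherapy history) in a logistic model. The features and labels are random placeholders, not the study data, and a rigorous version would compute the radiomics score out-of-fold.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
radiomics = rng.normal(size=(238, 11))          # 11 selected radiomic features
smoking = rng.integers(0, 2, size=238)
radiotherapy = rng.integers(0, 2, size=238)
y = rng.integers(0, 2, size=238)                # CIP (1) vs. OTP (0)

idx_tr, idx_va = train_test_split(np.arange(238), train_size=166, random_state=0, stratify=y)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(radiomics[idx_tr], y[idx_tr])
rad_score = rf.predict_proba(radiomics)[:, 1]   # radiomics score for every patient

combined_features = np.column_stack([rad_score, smoking, radiotherapy])
combined = LogisticRegression().fit(combined_features[idx_tr], y[idx_tr])

val_auc = roc_auc_score(y[idx_va], combined.predict_proba(combined_features[idx_va])[:, 1])
print("validation AUC:", round(val_auc, 3))
```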

A Left Atrial Positioning System to Enable Follow-Up and Cohort Studies.

Mehringer NJ, McVeigh ER

PubMed · May 27, 2025
We present a new algorithm to automatically convert 3-dimensional left atrium surface meshes into a standard 2-dimensional space: a Left Atrial Positioning System (LAPS). Forty-five contrast-enhanced 4-dimensional computed tomography datasets were collected from 30 subjects. The left atrium volume was segmented using a trained neural network and converted into a surface mesh. LAPS coordinates were calculated on each mesh by computing lines of longitude and latitude on the surface of the mesh with reference to the center of the posterior wall and the mitral valve. LAPS accuracy was evaluated with one-way transfer of coordinates from a template mesh to a synthetic ground truth, which was created by registering the template mesh and pre-calculated LAPS coordinates to a target mesh. The Euclidean distance error was measured between each test node and its ground-truth location. The median point-transfer error was 2.13 mm between follow-up scans of the same subject (n = 15) and 3.99 mm between different subjects (n = 30). The left atrium was divided into 24 anatomic regions and represented on a 2D square diagram. The Left Atrial Positioning System is fully automatic, accurate, robust to anatomic variation, and offers flexible visualization for mapping data in the left atrium. This provides a framework for comparing regional LA surface data values in both follow-up and cohort studies.
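A minimal sketch of the point-transfer error used above to evaluate LAPS: the Euclidean distance between each transferred node and its ground-truth location, summarised by the median. The coordinates below are random placeholders, not actual atrial meshes.

```python
import numpy as np

rng = np.random.default_rng(6)
ground_truth = rng.normal(size=(2000, 3)) * 20.0                      # mm, synthetic LA surface nodes
transferred = ground_truth + rng.normal(scale=2.0, size=(2000, 3))    # LAPS-mapped node locations

errors = np.linalg.norm(transferred - ground_truth, axis=1)           # per-node Euclidean error
print(f"median point-transfer error = {np.median(errors):.2f} mm")
```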

Modeling Brain Aging with Explainable Triamese ViT: Towards Deeper Insights into Autism Disorder.

Zhang Z, Aggarwal V, Angelov P, Jiang R

PubMed · May 27, 2025
Machine learning, particularly through advanced imaging techniques such as three-dimensional Magnetic Resonance Imaging (MRI), has significantly improved medical diagnostics. This is especially critical for diagnosing complex conditions like Alzheimer's disease. Our study introduces Triamese-ViT, an innovative tri-structure of Vision Transformers (ViTs) with a built-in interpretability function: its structure-aware explainability allows the key features or regions contributing to a prediction to be identified and visualized, and it integrates information from three perspectives to enhance brain age estimation. This method not only increases accuracy but also improves interoperability with existing techniques. When evaluated, Triamese-ViT demonstrated superior performance and produced insightful attention maps. We applied these attention maps to the analysis of natural aging and the diagnosis of Autism Spectrum Disorder (ASD). The results aligned with those from occlusion analysis, identifying the Cingulum, Rolandic Operculum, Thalamus, and Vermis as important regions in normal aging, and highlighting the Thalamus and Caudate Nucleus as key regions for ASD diagnosis.
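A minimal sketch of the occlusion analysis mentioned above: blocks of the input are masked in turn and the change in the model's prediction is recorded, producing an importance map that can be compared against attention maps. The `predict_age` function is a cheap stand-in for the actual Triamese-ViT, and the volume is a random placeholder.

```python
import numpy as np

def predict_age(volume: np.ndarray) -> float:
    # Placeholder regressor: weights voxels near the centre of the volume more heavily.
    w = np.exp(-((np.indices(volume.shape) - 32) ** 2).sum(axis=0) / 500.0)
    return float((volume * w).mean() * 100.0)

rng = np.random.default_rng(7)
mri = rng.random((64, 64, 64))               # hypothetical preprocessed MRI volume
baseline = predict_age(mri)

patch, importance = 16, np.zeros((4, 4, 4))
for i in range(4):
    for j in range(4):
        for k in range(4):
            occluded = mri.copy()
            occluded[i*patch:(i+1)*patch, j*patch:(j+1)*patch, k*patch:(k+1)*patch] = 0.0
            importance[i, j, k] = abs(predict_age(occluded) - baseline)  # prediction shift per block

print("most influential block:", np.unravel_index(importance.argmax(), importance.shape))
```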

ToPoMesh: accurate 3D surface reconstruction from CT volumetric data via topology modification.

Chen J, Zhu Q, Xie B, Li T

PubMed · May 27, 2025
Traditional computed tomography (CT) methods for 3D reconstruction face resolution limitations and require time-consuming post-processing workflows. While deep learning techniques improve segmentation accuracy, traditional voxel-based segmentation and surface reconstruction pipelines tend to introduce artifacts such as disconnected regions, topological inconsistencies, and stepped distortions. To overcome these challenges, we propose ToPoMesh, an end-to-end 3D mesh reconstruction deep learning framework for direct reconstruction of high-fidelity surface meshes from CT volume data. Our approach introduces three core innovations: (1) accurate local and global shape modeling that preserves and enhances local feature information through residual connectivity and self-attention mechanisms in graph convolutional networks; (2) an adaptive variant density (Avd) mesh de-pooling strategy that dynamically optimizes the vertex distribution; and (3) a topology modification module that iteratively prunes erroneous surfaces and smooths boundaries via variable regularity terms to obtain finer mesh surfaces. Experiments on the LiTS, MSD pancreas tumor, MSD hippocampus, and MSD spleen datasets demonstrate that ToPoMesh outperforms state-of-the-art methods. Quantitative evaluations demonstrate a 57.4% reduction in Chamfer distance (liver) and a 0.47% improvement in F-score compared to end-to-end 3D reconstruction methods, while qualitative results confirm enhanced fidelity for thin structures and complex anatomical topologies versus segmentation frameworks. Importantly, our method eliminates the need for manual post-processing, enables reconstruction of 3D meshes directly from images, and can provide precise guidance for surgical planning and diagnosis.
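A minimal sketch of the symmetric Chamfer distance used above to compare reconstructed and reference surfaces, computed between points sampled from the two meshes; the point clouds here are random placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Mean nearest-neighbour distance from p to q plus from q to p."""
    d_pq, _ = cKDTree(q).query(p)   # for each point in p, distance to its closest point in q
    d_qp, _ = cKDTree(p).query(q)
    return float(d_pq.mean() + d_qp.mean())

rng = np.random.default_rng(8)
reconstructed = rng.normal(size=(5000, 3))                               # points sampled from the predicted mesh
reference = reconstructed + rng.normal(scale=0.01, size=(5000, 3))       # points sampled from the ground-truth mesh
print(f"Chamfer distance = {chamfer_distance(reconstructed, reference):.4f}")
```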
