Page 12 of 33328 results

A Novel Two-step Classification Approach for Differentiating Bone Metastases From Benign Bone Lesions in SPECT/CT Imaging.

Xie W, Wang X, Liu M, Mai L, Shangguan H, Pan X, Zhan Y, Zhang J, Wu X, Dai Y, Pei Y, Zhang G, Yao Z, Wang Z

PubMed · Jul 2 2025
This study aims to develop and validate a novel two-step deep learning framework for the automated detection, segmentation, and classification of bone metastases in SPECT/CT imaging, accurately distinguishing malignant from benign lesions to improve early diagnosis and facilitate personalized treatment planning. A segmentation model, BL-Seg, was developed to automatically segment lesion regions in SPECT/CT images, using a multi-scale attention fusion module and a triple attention mechanism to capture metabolic variations and refine lesion boundaries. A radiomics-based ensemble learning classifier was then applied to integrate metabolic and texture features for benign-malignant differentiation. The framework was trained and evaluated on a proprietary dataset of SPECT/CT cases from our institution, divided into training and test sets acquired on Siemens SPECT/CT scanners with minor protocol differences. Performance metrics, including Dice coefficient, sensitivity, specificity, and AUC, were compared against conventional methods. BL-Seg achieved a Dice coefficient of 0.8797, surpassing existing segmentation models. The classification model yielded an AUC of 0.8502, with improved sensitivity and specificity compared with traditional approaches. The proposed framework, with BL-Seg's automated lesion segmentation, demonstrates superior accuracy in detecting, segmenting, and classifying bone metastases, offering a robust tool for early diagnosis and personalized treatment planning in metastatic bone disease.
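Segmentation quality above is summarized by the Dice coefficient, the overlap metric on which BL-Seg's 0.8797 is reported. A minimal sketch of the computation over binary masks (function name and toy masks are illustrative, not from the paper):

```python
def dice_coefficient(pred, truth):
    """Dice = 2|A∩B| / (|A| + |B|) over binary masks given as flat 0/1 lists."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks agree perfectly
    return 2.0 * intersection / total if total else 1.0

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
print(round(dice_coefficient(pred, truth), 4))  # 2*2/(3+3) = 0.6667
```

In practice the masks are full 3D volumes flattened per scan; the arithmetic is identical.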

Comparison of CNNs and Transformer Models in Diagnosing Bone Metastases in Bone Scans Using Grad-CAM.

Pak S, Son HJ, Kim D, Woo JY, Yang I, Hwang HS, Rim D, Choi MS, Lee SH

PubMed · Jul 1 2025
Convolutional neural networks (CNNs) have been studied for detecting bone metastases on bone scans; however, the application of ConvNeXt and transformer models has not yet been explored. This study aims to evaluate the performance of various deep learning models, including ConvNeXt and transformer models, in diagnosing metastatic lesions from bone scans. We retrospectively analyzed bone scans from patients with cancer obtained at 2 institutions: the training and validation sets (n=4626) were from Hospital 1 and the test set (n=1428) was from Hospital 2. The deep learning models evaluated included ResNet18, the Data-Efficient Image Transformer (DeiT), the Vision Transformer (ViT Large 16), the Swin Transformer (Swin Base), and ConvNeXt Large. Gradient-weighted class activation mapping (Grad-CAM) was used for visualization. In both the validation and test sets, the ConvNeXt Large model performed best (0.969 and 0.885, respectively), followed by the Swin Base model (0.965 and 0.840), both of which significantly outperformed ResNet18 (0.892 and 0.725). Subgroup analyses revealed that all models demonstrated greater diagnostic accuracy for patients with polymetastasis than for those with oligometastasis. Grad-CAM visualization revealed that the ConvNeXt Large model focused more on identifying local lesions, whereas the Swin Base model focused on global areas such as the axial skeleton and pelvis. Compared with traditional CNN and transformer models, the ConvNeXt model demonstrated superior diagnostic performance in detecting bone metastases from bone scans, especially in cases of polymetastasis, suggesting its potential in medical image analysis.
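Grad-CAM, used above for visualization, weights each convolutional feature map by the spatial average of the class-score gradient flowing into it, sums the weighted maps, and applies a ReLU. A dependency-free sketch of that core step (toy 2×2 activations; real use pulls these tensors from hooks on a trained network):

```python
def grad_cam(activations, gradients):
    """Minimal Grad-CAM: weight each feature map by its spatially averaged
    gradient, sum across channels, then apply ReLU.
    activations/gradients are [K][H][W] nested lists for one image."""
    k = len(activations)
    h, w = len(activations[0]), len(activations[0][0])
    # Channel weights: global average pooling of the gradients
    weights = [sum(sum(row) for row in gradients[c]) / (h * w) for c in range(k)]
    return [[max(0.0, sum(weights[c] * activations[c][i][j] for c in range(k)))
             for j in range(w)] for i in range(h)]

acts  = [[[1, 0], [0, 2]], [[0, 1], [1, 0]]]   # two 2x2 feature maps
grads = [[[1, 1], [1, 1]], [[-1, -1], [-1, -1]]]
print(grad_cam(acts, grads))  # [[1.0, 0.0], [0.0, 2.0]]
```

The resulting heatmap is then upsampled to image size and overlaid on the bone scan, which is how the local-versus-global attention contrast above is read.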

Automated classification of chondroid tumor using 3D U-Net and radiomics with deep features.

Le Dinh T, Lee S, Park H, Lee S, Choi H, Chun KS, Jung JY

PubMed · Jul 1 2025
Classifying chondroid tumors is an essential step for effective treatment planning. Recently, with advances in computer-aided diagnosis and the increasing availability of medical imaging data, automated tumor classification using deep learning shows promise in assisting clinical decision-making. In this study, we propose a hybrid approach that integrates deep learning and radiomics for chondroid tumor classification. First, we performed tumor segmentation using the nnUNetv2 framework, which provided three-dimensional (3D) delineation of tumor regions of interest (ROIs). From these ROIs, we extracted radiomics features and deep learning-derived features. After feature selection, we identified 15 radiomics and 15 deep features to build classification models. We trained five machine learning classifiers: Random Forest, XGBoost, Gradient Boosting, LightGBM, and CatBoost. The approach integrating radiomics features, ROI-derived deep learning features, and clinical variables yielded the best overall classification results. Among the classifiers, CatBoost achieved the highest accuracy of 0.90 (95% CI 0.90-0.93), a weighted kappa of 0.85, and an AUC of 0.91. These findings highlight the potential of integrating 3D U-Net-assisted segmentation with radiomics and deep learning features to improve classification of chondroid tumors.
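The best result above comes from fusing the 15 selected radiomics features, 15 deep features, and clinical variables into a single vector before classification. A minimal sketch of that fusion step, with a hypothetical majority vote standing in for the trained gradient-boosting/forest classifiers (the paper's actual ensemble members are trained models, not a vote):

```python
def fuse_features(radiomics, deep, clinical):
    """Concatenate selected radiomics, deep, and clinical features into the
    single vector fed to each classifier."""
    return list(radiomics) + list(deep) + list(clinical)

def majority_vote(predictions):
    """Hypothetical ensemble step: majority vote over per-classifier labels."""
    return max(set(predictions), key=predictions.count)

fused = fuse_features([0.12] * 15, [0.34] * 15, [63, 1])  # 15 + 15 + 2 features
print(len(fused))                                         # 32
print(majority_vote(["enchondroma", "chondrosarcoma", "enchondroma"]))
```

Each of the five classifiers sees the same fused vector; CatBoost alone reached the reported AUC of 0.91.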

SpineMamba: Enhancing 3D spinal segmentation in clinical imaging through residual visual Mamba layers and shape priors.

Zhang Z, Liu T, Fan G, Li N, Li B, Pu Y, Feng Q, Zhou S

PubMed · Jul 1 2025
Accurate segmentation of three-dimensional (3D) clinical medical images is critical for the diagnosis and treatment of spinal diseases. However, the complexity of spinal anatomy and the inherent uncertainties of current imaging technologies pose significant challenges for the semantic segmentation of spinal images. Although convolutional neural networks (CNNs) and Transformer-based models have achieved remarkable progress in spinal segmentation, their limitations in modeling long-range dependencies hinder further improvements in segmentation accuracy. To address these challenges, we propose a novel framework, SpineMamba, which incorporates a residual visual Mamba layer capable of effectively capturing and modeling the deep semantic features and long-range spatial dependencies in 3D spinal data. To further enhance the structural semantic understanding of the vertebrae, we also propose a novel spinal shape prior module that captures specific anatomical information about the spine from medical images, significantly enhancing the model's ability to extract structural semantic information of the vertebrae. Extensive comparative and ablation experiments across three datasets demonstrate that SpineMamba outperforms existing state-of-the-art models. On two computed tomography (CT) datasets, the average Dice similarity coefficients achieved are 94.40±4% and 88.28±3%, respectively, while on a magnetic resonance (MR) dataset, the model achieves a Dice score of 86.95±10%. Notably, SpineMamba surpasses the widely recognized nnU-Net in segmentation accuracy, with a maximum improvement of 3.63 percentage points. These results highlight the precision, robustness, and exceptional generalization capability of SpineMamba.
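The "residual" in the residual visual Mamba layer refers to the standard skip-connection pattern, output = x + f(x), which preserves an identity path through deep stacks. A toy sketch of just that wrapper (the inner block here is a placeholder function, not a Mamba/state-space layer):

```python
def residual(block, x):
    """Residual wrapper: output = x + block(x). The skip connection keeps an
    identity path through the layer, easing optimization of deep stacks."""
    fx = block(x)
    return [xi + fi for xi, fi in zip(x, fx)]

# Placeholder inner block (doubles each feature); a real layer would be a
# learned transform over 3D spinal feature maps.
out = residual(lambda v: [2 * xi for xi in v], [1, 2, 3])
print(out)  # [3, 6, 9]
```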

A Workflow-Efficient Approach to Pre- and Post-Operative Assessment of Weight-Bearing Three-Dimensional Knee Kinematics.

Banks SA, Yildirim G, Jachode G, Cox J, Anderson O, Jensen A, Cole JD, Kessler O

PubMed · Jul 1 2025
Knee kinematics during daily activities reflect disease severity preoperatively and are associated with clinical outcomes after total knee arthroplasty (TKA). It is widely believed that measured kinematics would be useful for preoperative planning and postoperative assessment. Despite decades-long interest in measuring three-dimensional (3D) knee kinematics, no methods are available for routine, practical clinical examinations. We report a clinically practical method utilizing machine-learning-enhanced software and upgraded C-arm fluoroscopy for the accurate and time-efficient measurement of pre-TKA and post-TKA 3D dynamic knee kinematics. Using a common C-arm with an upgraded detector and software, we performed an 8-s horizontal sweeping pulsed fluoroscopic scan of the weight-bearing knee joint. The knee was then imaged with pulsed C-arm fluoroscopy while the patient performed standing, kneeling, squatting, stair, chair, and gait activities. We used limited-arc cone-beam reconstruction methods to create 3D models of the femur and tibia/fibula bones with implants, which can then be used for model-image registration to quantify the 3D knee kinematics. The proposed protocol can be accomplished by an individual radiology technician in ten minutes and does not require additional equipment beyond a step and stool. The image analysis can be performed by a computer onboard the upgraded C-arm or in the cloud, before loading the examination results into the Picture Archiving and Communication System and Electronic Medical Record systems. Weight-bearing kinematics affects knee function pre- and post-TKA, yet making such measurements has long been exclusively the domain of researchers. We present an approach that leverages common, but digitally upgraded, imaging hardware and software to implement an efficient examination protocol for accurately assessing 3D knee kinematics.
With these capabilities, it will be possible to include dynamic 3D knee kinematics as a component of the routine clinical workup for patients who have diseased or replaced knees.
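Model-image registration, the final step above, searches for the rigid pose (rotation plus translation) of each bone model that best matches the fluoroscopic images. A minimal sketch of applying one candidate pose to model points (z-axis rotation only; a real registration optimizes all six degrees of freedom against projected silhouettes):

```python
import math

def apply_pose(points, yaw_deg, translation):
    """Apply a rigid pose (rotation about z plus translation) to 3D bone-model
    points -- the kind of transform iterated during model-image registration.
    Illustrative only; real pipelines score the projected model against the
    fluoroscopic image at each candidate pose."""
    t = math.radians(yaw_deg)
    c, s = math.cos(t), math.sin(t)
    tx, ty, tz = translation
    return [(c * x - s * y + tx, s * x + c * y + ty, z + tz)
            for x, y, z in points]

# Rotate a model vertex 90 degrees about z and lift it 5 units along z
print(apply_pose([(1.0, 0.0, 0.0)], 90.0, (0.0, 0.0, 5.0)))
```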

Association between muscle mass assessed by an artificial intelligence-based ultrasound imaging system and quality of life in patients with cancer-related malnutrition.

de Luis D, Cebria A, Primo D, Izaola O, Godoy EJ, Gomez JJL

PubMed · Jul 1 2025
Emerging evidence suggests that diminished skeletal muscle mass is associated with lower health-related quality of life (HRQOL) in individuals with cancer. To our knowledge, no studies in the literature have used an ultrasound system to evaluate muscle mass and its relationship with HRQOL. The aim of our study was to evaluate the relationship between HRQOL, determined by the EuroQol-5D tool, and muscle mass, determined by an artificial intelligence-based ultrasound system at the rectus femoris (RF) level, in outpatients with cancer. Anthropometric data by bioimpedance (BIA), muscle mass by the artificial intelligence-based ultrasound system at the RF level, biochemistry, dynamometry, and HRQOL were measured. A total of 158 patients with cancer were included, with a mean age of 70.6 ± 9.8 years. The mean body mass index was 24.4 ± 4.1 kg/m² with a mean body weight of 63.9 ± 11.7 kg (38% females and 62% males). A total of 57 patients (36.1%) had a severe degree of malnutrition. The tumor locations were colon-rectum in 66 patients (41.7%), esophagus-stomach in 56 (35.4%), pancreas in 16 (10.1%), and other sites in 20.2%. A positive correlation was detected among cross-sectional area (CSA), muscle thickness (MT), pennation angle, BIA parameters, and muscle strength. Patients in the groups below the median for the visual scale and the EuroQol-5D index had lower CSA, MT, BIA, and muscle strength values. In multivariate models, CSA (beta 4.25, 95% CI 2.03-6.47) remained associated with the visual scale, and muscle strength (beta 0.008, 95% CI 0.003-0.14) with the EuroQol-5D index. Muscle strength and pennation angle by ultrasound were associated with better scores in the mobility, self-care, and daily-activities dimensions. CSA, MT, and pennation angle of the RF, determined by an artificial intelligence-based muscle ultrasound system in outpatients with cancer, were related to HRQOL determined by EuroQol-5D.
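The associations reported above (e.g., between CSA, muscle strength, and EuroQol-5D scores) rest on correlation analysis. A self-contained Pearson correlation sketch (the data below are invented for illustration, not the study's):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy example: rectus femoris CSA (cm^2) vs. a quality-of-life visual scale
csa    = [4.1, 5.0, 5.8, 6.5, 7.2]
visual = [45, 55, 60, 70, 78]
print(round(pearson_r(csa, visual), 3))  # ≈ 0.994
```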

Cascade learning in multi-task encoder-decoder networks for concurrent bone segmentation and glenohumeral joint clinical assessment in shoulder CT scans.

Marsilio L, Marzorati D, Rossi M, Moglia A, Mainardi L, Manzotti A, Cerveri P

PubMed · Jul 1 2025
Osteoarthritis is a degenerative condition that affects bones and cartilage, often leading to structural changes, including osteophyte formation, bone density loss, and the narrowing of joint spaces. Over time, this process may disrupt glenohumeral (GH) joint functionality, requiring targeted treatment. Various options are available to restore joint function, ranging from conservative management to surgical interventions, depending on the severity of the condition. This work introduces an innovative deep learning framework to process shoulder CT scans. It features the semantic segmentation of the proximal humerus and scapula, the 3D reconstruction of bone surfaces, the identification of the GH joint region, and the staging of three common osteoarthritic-related conditions: osteophyte formation (OS), GH space reduction (JS), and humeroscapular alignment (HSA). Each condition was stratified into multiple severity stages, offering a comprehensive analysis of shoulder bone structure pathology. The pipeline comprised two cascaded CNN architectures: 3D CEL-UNet for segmentation and 3D Arthro-Net for threefold classification. A retrospective dataset of 571 CT scans featuring patients with various degrees of GH osteoarthritic-related pathologies was used to train, validate, and test the pipeline. Median root mean squared error and Hausdorff distance for 3D reconstruction were 0.22 mm and 1.48 mm for the humerus and 0.24 mm and 1.48 mm for the scapula, outperforming state-of-the-art architectures and making the framework potentially suitable for patient-specific instrumentation (PSI)-based preoperative planning in shoulder arthroplasty. The classification accuracy for OS, JS, and HSA consistently reached around 90% for all three conditions. The computational time for the entire inference pipeline was less than 15 s, showcasing the framework's efficiency and compatibility with orthopedic radiology practice.
The achieved reconstruction and classification accuracy, combined with the rapid processing time, represent a promising advancement towards the medical translation of artificial intelligence tools. This progress aims to streamline the preoperative planning pipeline, delivering high-quality bone surfaces and supporting surgeons in selecting the most suitable surgical approach according to the unique patient joint conditions.
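The Hausdorff distance used above to score the reconstructions (median 1.48 mm) is the worst-case surface disagreement: the largest distance from any point of one surface to its nearest point on the other, symmetrized. A brute-force sketch over small 3D point sets (fine for toy data; real meshes need spatial indexing):

```python
def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two 3D point sets
    (e.g., surface vertices of a reconstructed and a reference bone)."""
    def directed(p, q):
        # For each point in p, the distance to its nearest neighbor in q;
        # take the worst such case.
        return max(min(sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5
                       for v in q) for u in p)
    return max(directed(a, b), directed(b, a))

a = [(0, 0, 0), (1, 0, 0)]
b = [(0, 0, 0), (1, 0, 3)]
print(hausdorff_distance(a, b))  # 3.0
```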

Identifying Primary Sites of Spinal Metastases: Expert-Derived Features vs. ResNet50 Model Using Nonenhanced MRI.

Liu K, Ning J, Qin S, Xu J, Hao D, Lang N

PubMed · Jul 1 2025
The spinal column is a frequent site for metastases, affecting over 30% of solid tumor patients. Identifying the primary tumor is essential for guiding clinical decisions but often requires resource-intensive diagnostics. To develop and validate artificial intelligence (AI) models using noncontrast MRI to identify primary sites of spinal metastases, aiming to enhance diagnostic efficiency. Retrospective. A total of 514 patients with pathologically confirmed spinal metastases (mean age, 59.3 ± 11.2 years; 294 males) were included, split into a development set (360) and a test set (154). Noncontrast sagittal MRI sequences (T1-weighted, T2-weighted, and fat-suppressed T2) were acquired using 1.5 T and 3 T scanners. Two models were evaluated for identifying primary sites of spinal metastases: the expert-derived features (EDF) model using radiologist-identified imaging features and a ResNet50-based deep learning (DL) model trained on noncontrast MRI. Performance was assessed using accuracy, precision, recall, F1 score, and the area under the receiver operating characteristic curve (ROC-AUC) for top-1, top-2, and top-3 indicators. Statistical analyses included Shapiro-Wilk, t tests, Mann-Whitney U test, and chi-squared tests. ROC-AUCs were compared via DeLong tests, with 95% confidence intervals from 1000 bootstrap replications and significance at P < 0.05. The EDF model outperformed the DL model in top-3 accuracy (0.88 vs. 0.69) and AUC (0.80 vs. 0.71). Subgroup analysis showed superior EDF performance for common sites like lung and kidney (e.g., kidney F1: 0.94 vs. 0.76), while the DL model had higher recall for rare sites like thyroid (0.80 vs. 0.20). SHapley Additive exPlanations (SHAP) analysis identified sex (SHAP: -0.57 to 0.68), age (-0.48 to 0.98), T1WI signal intensity (-0.29 to 0.72), and pathological fractures (-0.76 to 0.25) as key features. AI techniques using noncontrast MRI improve diagnostic efficiency for spinal metastases. 
The EDF model outperformed the DL model, showing greater clinical potential. Spinal metastases, or cancer spreading to the spine, are common in patients with advanced cancer, often requiring extensive tests to determine the original tumor site. Our study explored whether artificial intelligence could make this process faster and more accurate using noncontrast MRI scans. We tested two methods: one based on radiologists' expertise in identifying imaging features and another using a deep learning model trained to analyze MRI images. The expert-based method was more reliable, correctly identifying the tumor site in 88% of cases when considering the top three likely diagnoses. This approach may help doctors reduce diagnostic time and improve patient care. LEVEL OF EVIDENCE: 3 TECHNICAL EFFICACY: Stage 2.
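The top-1/top-2/top-3 indicators above measure whether the true primary site appears among the model's k most probable candidates. A minimal sketch (the tumor sites and rankings are made up for illustration):

```python
def top_k_accuracy(rankings, truths, k):
    """Fraction of cases whose true primary site appears among the model's
    k highest-ranked candidates (each ranking lists sites most-probable first)."""
    hits = sum(truth in ranking[:k] for ranking, truth in zip(rankings, truths))
    return hits / len(truths)

rankings = [["lung", "kidney", "thyroid"],
            ["kidney", "lung", "breast"],
            ["breast", "thyroid", "lung"]]
truths = ["lung", "breast", "lung"]
print(top_k_accuracy(rankings, truths, 1))  # 1/3
print(top_k_accuracy(rankings, truths, 3))  # 1.0
```

The EDF model's 0.88 top-3 accuracy is exactly this quantity with k = 3 over the 154-case test set.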

Automated vertebrae identification and segmentation with structural uncertainty analysis in longitudinal CT scans of patients with multiple myeloma.

Madzia-Madzou DK, Jak M, de Keizer B, Verlaan JJ, Minnema MC, Gilhuijs K

PubMed · Jul 1 2025
Optimize deep learning-based vertebrae segmentation in longitudinal CT scans of multiple myeloma patients using structural uncertainty analysis. Retrospective CT scans from 474 multiple myeloma patients were divided into a training cohort (179 patients, 349 scans, 2005-2011) and a test cohort (295 patients, 671 scans, 2012-2020). An enhanced segmentation pipeline was developed on the training cohort. It integrated vertebrae segmentation using an open-source deep learning method (Payer's) with a post-hoc structural uncertainty analysis. This analysis identified inconsistencies, automatically correcting them or flagging uncertain regions for human review. Segmentation quality was assessed through vertebral shape analysis using topology. Metrics included identification rate, longitudinal vertebral match rate, success rate, and series success rate, and were evaluated across age/sex subgroups. Statistical analysis included McNemar and Wilcoxon signed-rank tests, with p < 0.05 indicating significant improvement. Payer's method achieved an identification rate of 95.8% and a success rate of 86.7%. The proposed pipeline automatically improved these metrics to 98.8% and 96.0%, respectively (p < 0.001). Additionally, 3.6% of scans were flagged for human inspection, increasing the success rate from 96.0% to 98.8% (p < 0.001). The vertebral match rate increased from 97.0% to 99.7% (p < 0.001), and the series success rate from 80.0% to 95.4% (p < 0.001). Subgroup analysis showed more consistent performance across age and sex groups. The proposed pipeline significantly outperforms Payer's method, enhancing segmentation accuracy and reducing longitudinal matching errors while minimizing evaluation workload. Its uncertainty analysis ensures robust performance, making it a valuable tool for longitudinal studies in multiple myeloma.
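Paired improvements such as 86.7% → 96.0% success are tested above with McNemar's test on per-scan outcomes. A sketch of the continuity-corrected statistic (the counts are illustrative; the result is compared against chi-square with 1 df, where 3.84 corresponds to p = 0.05):

```python
def mcnemar_statistic(b, c):
    """Continuity-corrected McNemar chi-square for paired binary outcomes.
    b: scans only the baseline method segmented successfully;
    c: scans only the enhanced pipeline segmented successfully."""
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

# Illustrative counts: the pipeline succeeds on 64 scans the baseline misses,
# while the baseline succeeds on 3 the pipeline misses.
print(round(mcnemar_statistic(3, 64), 2))  # (|3-64|-1)^2 / 67 = 53.73
```

Only the discordant pairs matter; scans both methods handle (or both fail) carry no information about which method is better.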

Visualizing Preosteoarthritis: Updates on UTE-Based Compositional MRI and Deep Learning Algorithms.

Sun D, Wu G, Zhang W, Gharaibeh NM, Li X

PubMed · Jul 1 2025
Osteoarthritis (OA) is heterogeneous and involves structural changes in the whole joint, such as cartilage, meniscus/labrum, ligaments, and tendons, mainly tissues with short T2 relaxation times. Detecting OA before the onset of irreversible changes is crucial for early proactive management and for limiting the growing disease burden. Recent advanced quantitative imaging techniques and deep learning (DL) algorithms in musculoskeletal imaging have shown great potential for visualizing "pre-OA." In this review, we first focus on ultrashort echo time-based magnetic resonance imaging (MRI) techniques for direct visualization as well as quantitative morphological and compositional assessment of both short- and long-T2 musculoskeletal tissues, and second, explore how DL is revolutionizing MRI analysis (eg, automatic tissue segmentation and extraction of quantitative image biomarkers) and the classification, prediction, and management of OA. PLAIN LANGUAGE SUMMARY: Detecting osteoarthritis (OA) before the onset of irreversible changes is crucial for early proactive management. OA is heterogeneous and involves structural changes in the whole joint, such as cartilage, meniscus/labrum, ligaments, and tendons, mainly tissues with short T2 relaxation times. Ultrashort echo time-based magnetic resonance imaging (MRI), in particular, enables direct visualization and quantitative compositional assessment of short-T2 tissues. Deep learning is revolutionizing MRI analysis (eg, automatic tissue segmentation and extraction of quantitative image biomarkers) and the detection, classification, and prediction of disease. Together, these advances move the field toward identification of imaging biomarkers/features for pre-OA. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY: Stage 2.