Page 35 of 42420 results

Generating Synthetic T2*-Weighted Gradient Echo Images of the Knee with an Open-source Deep Learning Model.

Vrettos K, Vassalou EE, Vamvakerou G, Karantanas AH, Klontzas ME

PubMed · Jun 1 2025
Routine knee MRI protocols for 1.5 T and 3 T scanners do not include T2*-weighted gradient echo (T2*W) images, which are useful in several clinical scenarios, such as the assessment of cartilage, synovial blooming (hemosiderin deposition), chondrocalcinosis, and the evaluation of the physis in pediatric patients. Herein, we aimed to develop an open-source deep learning model that creates synthetic T2*W images of the knee from fat-suppressed intermediate-weighted images. A CycleGAN model was trained on 12,118 sagittal knee MR images and tested on an independent set of 2996 images. Diagnostic interchangeability of synthetic T2*W images was assessed against a series of findings. Voxel intensity of four tissues was evaluated with Bland-Altman plots. Image quality was assessed using the normalized root mean squared error (NRMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). Code, the model, and a standalone executable file are provided on GitHub. The model achieved a median NRMSE, PSNR, and SSIM of 0.5, 17.4, and 0.5, respectively. Images were found interchangeable, with an intraclass correlation coefficient >0.95 for all findings. Mean voxel intensity was equal between synthetic and conventional images. Four types of artifacts were identified: geometrical distortion (86/163 cases), object insertion/omission (11/163 cases), a wrap-around-like artifact (26/163 cases), and an incomplete fat-suppression artifact (120/163 cases), with a median impact score of 0 (no impact) on diagnosis. In conclusion, the developed open-source GAN model creates synthetic T2*W images of the knee of high diagnostic value and quality. The identified artifacts had no or minor effect on the diagnostic value of the images.
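The NRMSE and PSNR figures reported above follow standard definitions; a minimal sketch of both (assuming normalization by the reference intensity range and the reference peak value, which the abstract does not specify) looks like this:

```python
import numpy as np

def nrmse(ref, syn):
    """Root mean squared error normalized by the reference intensity range."""
    rmse = np.sqrt(np.mean((ref - syn) ** 2))
    return rmse / (ref.max() - ref.min())

def psnr(ref, syn):
    """Peak signal-to-noise ratio in dB, using the reference's peak value."""
    mse = np.mean((ref - syn) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)
```

Both metrics compare a synthetic image against its conventional counterpart voxel-wise; SSIM additionally models local structure and is usually taken from an existing library rather than reimplemented.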

Diagnosis of carpal tunnel syndrome using deep learning with comparative guidance.

Sim J, Lee S, Kim S, Jeong SH, Yoon J, Baek S

PubMed · Jun 1 2025
This study aims to develop a deep learning model for a robust diagnosis of Carpal Tunnel Syndrome (CTS) based on comparative classification leveraging ultrasound images of the thenar and hypothenar muscles. We recruited 152 participants, comprising both patients with varying severities of CTS and healthy individuals. The enrolled patients underwent ultrasonography, which provided ultrasound image data of the thenar and hypothenar muscles, innervated by the median and ulnar nerves, respectively. These images were used to train a deep learning model. We compared the performance of our model with previous comparative methods using echo intensity ratio or machine learning, and with non-comparative methods based on deep learning. During the training process, comparative guidance based on cosine similarity was used so that the model learns to automatically identify abnormal differences in echotexture between the ultrasound images of the thenar and hypothenar muscles. The proposed deep learning model with comparative guidance showed the highest performance. The comparison of receiver operating characteristic (ROC) curves between models demonstrated that the comparative guidance was effective in autonomously identifying complex features within the CTS dataset. The proposed deep learning model with comparative guidance was shown to be effective in automatically identifying important features for CTS diagnosis from the ultrasound images. The proposed comparative approach was found to be robust to traditional problems in ultrasound image analysis, such as different cut-off values and anatomical variation between patients. The proposed deep learning methodology facilitates accurate and efficient diagnosis of CTS from ultrasound images.
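The abstract does not give the exact form of the cosine-similarity guidance term, but the idea of comparing paired embeddings can be sketched as a toy loss (the in/out shapes and the healthy-vs-CTS sign convention here are assumptions for illustration):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def comparative_guidance_loss(emb_thenar, emb_hypothenar, is_cts):
    """Toy comparative term: pull thenar/hypothenar embeddings together for
    healthy hands (similar echotexture) and push them apart for CTS hands.
    Returns a value in [0, 2]."""
    sim = cosine_similarity(emb_thenar, emb_hypothenar)
    return 1.0 + sim if is_cts else 1.0 - sim
```

In practice such a term would be added to the classification loss so the network is rewarded for encoding the thenar/hypothenar echotexture difference itself.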

Expanded AI learning: AI as a Tool for Human Learning.

Faghani S, Tiegs-Heiden CA, Moassefi M, Powell GM, Ringler MD, Erickson BJ, Rhodes NG

PubMed · Jun 1 2025
To demonstrate that a deep learning (DL) model can be employed as a teaching tool to improve radiologists' ability to perform a subsequent imaging task without additional artificial intelligence (AI) assistance at the time of image interpretation. Three human readers were tasked to categorize 50 frontal knee radiographs by male and female sex before and after reviewing data derived from our DL model. The model's high accuracy in performing this task was revealed to the human subjects, who were also supplied the DL model's resultant occlusion interpretation maps ("heat maps") to serve as a teaching tool for study before final testing. Two weeks later, the three human readers performed the same task with a new set of 50 radiographs. The average accuracy of the three human readers was initially 0.59 (95%CI: 0.59-0.65), not statistically different from guessing given our sample skew. The DL model categorized sex with 0.96 accuracy. After study of AI-derived "heat maps" and associated radiographs, the average accuracy of the human readers, without the direct help of AI, on the new set of radiographs increased to 0.80 (95%CI: 0.73-0.86), a significant improvement (p=0.0270). AI-derived data can be used as a teaching tool to improve radiologists' own ability to perform an imaging task. To our knowledge, this idea has not previously been advanced in the radiology literature. AI can be used as a teaching tool to improve the intrinsic accuracy of radiologists, even without the concurrent use of AI.
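The occlusion interpretation maps described above follow a standard recipe: slide a mask over the image and record how much the model's score drops. A minimal sketch (the patch size, fill value, and `score_fn` interface are assumptions, not details from the paper):

```python
import numpy as np

def occlusion_map(image, score_fn, patch=2, fill=0.0):
    """Occlusion sensitivity: for each patch, the drop in the model's score
    when that patch is masked out. Large values mark influential regions."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros_like(image, dtype=float)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i:i + patch, j:j + patch] = base - score_fn(occluded)
    return heat
```

The resulting heat map is what the readers studied: regions whose occlusion hurts the score most are the ones the model relies on.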

LiDSCUNet++: A lightweight depth separable convolutional UNet++ for vertebral column segmentation and spondylosis detection.

Agrawal KK, Kumar G

PubMed · May 31 2025
Accurate computer-aided diagnosis systems rely on precise segmentation of the vertebral column to assist physicians in diagnosing various disorders. However, segmenting spinal disks and bones becomes challenging in the presence of abnormalities and complex anatomical structures. While Deep Convolutional Neural Networks (DCNNs) achieve remarkable results in medical image segmentation, their performance is limited by data insufficiency and the high computational complexity of existing solutions. This paper introduces LiDSCUNet++, a lightweight deep learning framework based on depthwise-separable and pointwise convolutions integrated with UNet++ for vertebral column segmentation. The model segments vertebral anomalies from dog radiographs, and the results are further processed by YOLOv8 for automated detection of Spondylosis Deformans. LiDSCUNet++ delivers comparable segmentation performance while significantly reducing trainable parameters, memory usage, energy consumption, and computational time, making it an efficient and practical solution for medical image analysis.
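The lightweight design rests on factorizing a full convolution into a per-channel depthwise step plus a 1x1 pointwise step, cutting a k·k·C·C_out weight tensor down to k·k·C + C·C_out weights. A minimal numpy sketch of that factorization (shapes and 'valid' padding are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """x: (C, H, W); dw_kernels: (C, k, k), one spatial filter per channel;
    pw_weights: (C_out, C), a 1x1 convolution mixing channels.
    'Valid' padding, stride 1, no bias or nonlinearity."""
    C, H, W = x.shape
    k = dw_kernels.shape[1]
    Ho, Wo = H - k + 1, W - k + 1
    dw = np.zeros((C, Ho, Wo))
    for c in range(C):  # depthwise: spatial filtering, channel by channel
        for i in range(Ho):
            for j in range(Wo):
                dw[c, i, j] = np.sum(x[c, i:i + k, j:j + k] * dw_kernels[c])
    # pointwise: 1x1 conv = linear channel mixing at every spatial location
    return np.tensordot(pw_weights, dw, axes=([1], [0]))
```

For a 3x3 kernel with C=C_out=64, this factorization needs 576 + 4096 weights instead of 36,864, which is the source of the parameter and energy savings claimed above.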

A Study on Predicting the Efficacy of Posterior Lumbar Interbody Fusion Surgery Using a Deep Learning Radiomics Model.

Fang L, Pan Y, Zheng H, Li F, Zhang W, Liu J, Zhou Q

PubMed · May 30 2025
This study seeks to develop a combined model integrating clinical data, radiomics, and deep learning (DL) for predicting the efficacy of posterior lumbar interbody fusion (PLIF) surgery. A retrospective review was conducted on 461 patients who underwent PLIF for degenerative lumbar diseases. These patients were partitioned into a training set (n=368) and a test set (n=93) in an 8:2 ratio. Clinical, radiomics, and DL models were constructed using logistic regression and random forest, and a combined model was established by integrating these three models. All radiomics and DL features were extracted from sagittal T2-weighted images using 3D Slicer software. The least absolute shrinkage and selection operator (LASSO) method selected the optimal radiomics and DL features to build the models. In addition to analyzing the original region of interest (ROI), we also applied different degrees of mask expansion to the ROI to determine the optimal ROI. Model performance was evaluated using the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC); differences in AUC were compared by the DeLong test. Among the clinical characteristics, patient age, body weight, and preoperative intervertebral distance at the surgical segment were risk factors affecting the fusion outcome. The radiomics model based on MRI with a 10 mm expanded mask showed excellent performance (training set AUC=0.814, 95% CI: [0.761-0.866]; test set AUC=0.749, 95% CI: [0.631-0.866]). Among all single models, the DL model had the best diagnostic prediction performance, with AUC values of 0.995 (95% CI: [0.991-0.999]) for the training set and 0.803 (95% CI: [0.705-0.902]) for the test set. Compared to all single models, the combined model of clinical, radiomics, and DL features had the best diagnostic prediction performance, with AUC values of 0.993 (95% CI: [0.987-0.999]) for the training set and 0.866 (95% CI: [0.778-0.955]) for the test set. The proposed clinical feature-deep learning radiomics model can effectively predict the postoperative efficacy of patients undergoing PLIF surgery and has good clinical applicability.
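The LASSO-then-classify pipeline described above is a common pattern; a minimal scikit-learn sketch (the penalty `alpha` and the choice of a logistic second stage are hypothetical illustration, not the study's tuned settings):

```python
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression

def lasso_selected_logistic(X, y, alpha=0.05):
    """LASSO keeps a sparse subset of radiomics/DL feature columns
    (nonzero coefficients), then a logistic model is fit on the survivors."""
    lasso = Lasso(alpha=alpha).fit(X, y)
    keep = np.flatnonzero(lasso.coef_)          # indices of selected features
    clf = LogisticRegression(max_iter=1000).fit(X[:, keep], y)
    return keep, clf
```

Feature standardization before LASSO (omitted here for brevity) is usually essential so that the penalty treats all radiomics features on the same scale.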

End-to-end 2D/3D registration from pre-operative MRI to intra-operative fluoroscopy for orthopedic procedures.

Ku PC, Liu M, Grupp R, Harris A, Oni JK, Mears SC, Martin-Gomez A, Armand M

PubMed · May 30 2025
Soft tissue pathologies and bone defects are not easily visible in intra-operative fluoroscopic images; therefore, we develop an end-to-end MRI-to-fluoroscopic image registration framework, aiming to enhance intra-operative visualization for surgeons during orthopedic procedures. The proposed framework utilizes deep learning to segment MRI scans and generate synthetic CT (sCT) volumes. These sCT volumes are then used to produce digitally reconstructed radiographs (DRRs), enabling 2D/3D registration with intra-operative fluoroscopic images. The framework's performance was validated through simulation and cadaver studies for core decompression (CD) surgery, focusing on the registration accuracy of femur and pelvic regions. The framework achieved a mean translational registration accuracy of 2.4 ± 1.0 mm and rotational accuracy of 1.6 ± 0.8° for the femoral region in cadaver studies. The method successfully enabled intra-operative visualization of necrotic lesions that were not visible on conventional fluoroscopic images, marking a significant advancement in image guidance for femur and pelvic surgeries. The MRI-to-fluoroscopic registration framework offers a novel approach to image guidance in orthopedic surgeries, exclusively using MRI without the need for CT scans. This approach enhances the visualization of soft tissues and bone defects, reduces radiation exposure, and provides a safer, more effective alternative for intra-operative surgical guidance.
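At its simplest, a DRR is a set of line integrals through a CT-like volume, and 2D/3D registration scores each candidate pose by comparing the DRR to the fluoroscopic image with a similarity metric. A toy sketch of both pieces (parallel-beam sum projection and normalized cross-correlation; real systems use cone-beam geometry and richer metrics):

```python
import numpy as np

def drr_from_volume(volume, axis=0):
    """Toy digitally reconstructed radiograph: parallel-beam line integrals
    through a (synthetic) CT volume, i.e. a sum projection along one axis."""
    return volume.sum(axis=axis)

def ncc(a, b):
    """Normalized cross-correlation, a common 2D/3D registration similarity;
    1.0 means the DRR and the fluoroscopic image match perfectly (up to
    brightness/contrast)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))
```

An optimizer would rotate/translate the sCT volume, regenerate the DRR, and maximize `ncc` against the intra-operative fluoroscopic image.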

HVAngleEst: A Dataset for End-to-end Automated Hallux Valgus Angle Measurement from X-Ray Images.

Wang Q, Ji D, Wang J, Liu L, Yang X, Zhang Y, Liang J, Liu P, Zhao H

PubMed · May 30 2025
Accurate measurement of the hallux valgus angle (HVA) and intermetatarsal angle (IMA) is essential for diagnosing hallux valgus and determining appropriate treatment strategies. Traditional manual measurement methods, while standardized, are time-consuming, labor-intensive, and subject to evaluator bias. Recent advancements in deep learning have been applied to hallux valgus angle estimation, but the development of effective algorithms requires large, well-annotated datasets. Existing X-ray datasets are typically limited to images of cropped foot regions, and only one dataset, containing very few samples, is publicly available. To address these challenges, we introduce HVAngleEst, the first large-scale, open-access dataset specifically designed for hallux valgus angle estimation. HVAngleEst comprises 1,382 X-ray images from 1,150 patients and includes comprehensive annotations, such as foot localization, hallux valgus angles, and line segments for each phalanx. This dataset enables fully automated, end-to-end hallux valgus angle estimation, reducing manual labor and eliminating evaluator bias.
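Given the annotated line segments, turning them into an angle is straightforward vector geometry. A minimal sketch (the endpoint convention is an assumption; for the HVA the two segments would be the first-metatarsal and proximal-phalanx axes):

```python
import numpy as np

def segment_angle_deg(p1, p2, q1, q2):
    """Angle in degrees between two bone axes, each given as a pair of
    2D endpoints, e.g. from the dataset's per-phalanx line-segment labels."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(q2, float) - np.asarray(q1, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```

An end-to-end model can therefore be supervised either on the segments themselves or directly on the resulting angles.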

Three-dimensional automated segmentation of adolescent idiopathic scoliosis on computed tomography driven by deep learning: A retrospective study.

Ji Y, Mei X, Tan R, Zhang W, Ma Y, Peng Y, Zhang Y

PubMed · May 30 2025
Accurate vertebrae segmentation is crucial for modern surgical technologies, and deep learning networks provide valuable tools for this task. This study explores the application of advanced deep learning-based methods for segmenting vertebrae in computed tomography (CT) images of adolescent idiopathic scoliosis (AIS) patients. In this study, we collected a dataset of 31 samples from AIS patients, covering a wide range of spinal regions from cervical to lumbar vertebrae. High-resolution CT images were obtained for each sample, forming the basis of our segmentation analysis. We utilized 2 popular neural networks, U-Net and Attention U-Net, to segment the vertebrae in these CT images. Segmentation performance was rigorously evaluated using 2 key metrics: the Dice coefficient, to measure overlap between segmented and ground truth regions, and the Hausdorff distance (HD), to assess boundary dissimilarity. Both networks performed well, with U-Net achieving an average Dice coefficient of 92.2 ± 2.4% and an HD of 9.80 ± 1.34 mm. Attention U-Net showed similar results, with a Dice coefficient of 92.3 ± 2.9% and an HD of 8.67 ± 3.38 mm. When applied to the challenging anatomy of AIS, our findings align with literature results from advanced 3D U-Nets on healthy spines. Although no significant overall difference was observed between the 2 networks (P > .05), Attention U-Net exhibited an improved Dice coefficient (91.5 ± 0.0% vs 88.8 ± 0.1%, P = .151) and a significantly better HD (9.04 ± 4.51 vs. 13.60 ± 2.26 mm, P = .027) in critical scoliosis sites (mid-thoracic region), suggesting enhanced suitability for complex anatomy. Our study indicates that U-Net neural networks are feasible and effective for automated vertebrae segmentation in AIS patients using clinical 3D CT images. Attention U-Net demonstrated improved performance in thoracic levels, which are primary sites of scoliosis and may be more suitable for challenging anatomical regions.
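The two evaluation metrics above have compact definitions; a minimal sketch of both for binary masks and boundary point sets (a brute-force Hausdorff shown here, whereas real pipelines use optimized implementations on surface voxels):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets of shape (N, d):
    the worst-case nearest-neighbor distance in either direction."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

Dice rewards bulk overlap while HD penalizes the single worst boundary error, which is why the two can disagree, as in the mid-thoracic results above.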

Research on multi-algorithm and explainable AI techniques for predictive modeling of acute spinal cord injury using multimodal data.

Tai J, Wang L, Xie Y, Li Y, Fu H, Ma X, Li H, Li X, Yan Z, Liu J

PubMed · May 29 2025
Machine learning technology has been extensively applied in the medical field, particularly in the context of disease prediction and patient rehabilitation assessment. Acute spinal cord injury (ASCI) is a sudden trauma that frequently results in severe neurological deficits and a significant decline in quality of life. Early prediction of neurological recovery is crucial for personalized treatment planning. While such methods have been extensively explored in other medical fields, this study is the first to apply multiple machine learning methods and Shapley Additive Explanations (SHAP) analysis specifically to ASCI for predicting neurological recovery. A total of 387 ASCI patients were included, with clinical, imaging, and laboratory data collected. Key features were selected using univariate analysis, Lasso regression, and other feature selection techniques, integrating clinical, radiomics, and laboratory data. A range of machine learning models, including XGBoost, Logistic Regression, KNN, SVM, Decision Tree, Random Forest, LightGBM, ExtraTrees, Gradient Boosting, and Gaussian Naive Bayes, were evaluated, with Gaussian Naive Bayes exhibiting the best performance. Radiomics features extracted from T2-weighted fat-suppressed MRI scans, such as original_glszm_SizeZoneNonUniformity and wavelet-HLL_glcm_SumEntropy, significantly enhanced predictive accuracy. SHAP analysis identified critical clinical features, including IMLL, INR, BMI, Cys C, and RDW-CV, in the predictive model. The model was validated and demonstrated excellent performance across multiple metrics. The clinical utility and interpretability of the model were further enhanced through the application of patient clustering and nomogram analysis. This model has the potential to serve as a reliable tool for clinicians in the formulation of personalized treatment plans and prognosis assessment.
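SHAP itself is typically computed with the `shap` library; as a simpler, model-agnostic cousin of the same idea, permutation importance measures how much a model's score drops when one feature column is shuffled (this sketch is an illustration of that general explanation strategy, not the study's SHAP pipeline):

```python
import numpy as np

def permutation_importance(score_fn, X, y, seed=None):
    """Drop in score when each feature column is independently shuffled.
    score_fn(X, y) returns a scalar score (e.g. accuracy) for a fitted model."""
    rng = np.random.default_rng(seed)
    base = score_fn(X, y)
    out = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # break feature-target link
        out[j] = base - score_fn(Xp, y)
    return out
```

Unlike SHAP, this gives one global score per feature rather than per-patient attributions, but the ranking of influential features is often similar.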

An AI system for continuous knee osteoarthritis severity grading: An anomaly detection inspired approach with few labels.

Belton N, Lawlor A, Curran KM

PubMed · May 28 2025
The diagnostic accuracy and subjectivity of existing Knee Osteoarthritis (OA) ordinal grading systems have been a subject of ongoing debate and concern. Existing automated solutions are trained to emulate these imperfect systems, whilst also being reliant on large annotated databases for fully-supervised training. This work proposes a three-stage approach for automated continuous grading of knee OA that is built upon the principles of Anomaly Detection (AD): learning a robust representation of healthy knee X-rays and grading disease severity based on its distance to the centre of normality. In the first stage, SS-FewSOME is proposed, a self-supervised AD technique that learns the 'normal' representation, requiring only examples of healthy subjects and <3% of the labels that existing methods require. In the second stage, this model is used to pseudo label a subset of unlabelled data as 'normal' or 'anomalous', followed by denoising of pseudo labels with CLIP. The final stage involves retraining on labelled and pseudo labelled data using the proposed Dual Centre Representation Learning (DCRL), which learns the centres of two representation spaces: normal and anomalous. Disease severity is then graded based on the distance to the learned centres. The proposed methodology outperforms existing techniques by margins of up to 24% in terms of OA detection, and the disease severity scores correlate with the Kellgren-Lawrence grading system at the same level as human expert performance. Code available at https://github.com/niamhbelton/SS-FewSOME_Disease_Severity_Knee_Osteoarthritis.
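The final grading step, distance to learned centres, can be sketched in a few lines. This toy version maps an embedding to a continuous severity in [0, 1] using the relative distance to the two centres (the exact DCRL scoring function is not specified in the abstract, so this normalization is an assumption):

```python
import numpy as np

def continuous_severity(embedding, normal_centre, anomalous_centre):
    """Continuous severity score in [0, 1]: 0 at the 'normal' centre,
    1 at the 'anomalous' centre, interpolating by relative distance."""
    d_n = np.linalg.norm(embedding - normal_centre)
    d_a = np.linalg.norm(embedding - anomalous_centre)
    return float(d_n / (d_n + d_a))
```

Because the output is continuous rather than one of a few ordinal grades, borderline knees land between the centres instead of being forced into a discrete bin.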