Page 105 of 6386373 results

Panwar P, Chaurasia S, Gangrade J, Bilandi A

PubMed · Sep 30 2025
Knee osteoarthritis (K-OA) is a progressive joint condition with global prevalence, deteriorating over time and affecting a significant portion of the population. It arises from gradual wear of the joint: as the cushioning cartilage erodes, the bones rub together, causing stiffness, discomfort, and restricted movement. People with osteoarthritis struggle with simple activities such as walking, standing, or climbing stairs, and the ongoing pain and disability can also affect mental well-being. Knee osteoarthritis thus exerts a sustained impact on both the economy and society. Typically, radiologists assess knee health through MRI or X-ray images, assigning KL grades. MRI excels at visualizing soft tissues such as cartilage, menisci, and ligaments, directly revealing the cartilage degeneration and joint inflammation crucial for osteoarthritis (OA) diagnosis. In contrast, X-rays primarily show bone and can only infer cartilage loss through joint-space narrowing, a late indicator of OA. This makes MRI superior for detecting early changes and subtle lesions often missed by X-rays. However, manual diagnosis of knee osteoarthritis is laborious and time-consuming. In response, deep learning methodologies such as the vision transformer (ViT) have been implemented to enhance efficiency and streamline workflows in clinical settings. This research leverages ViT for knee osteoarthritis KL grading, achieving an accuracy of 88%. It illustrates that employing a simple transfer learning technique with this model yields superior performance compared to more intricate architectures.

Yoon YJ, Seo S, Lee S, Lim H, Choo K, Kim D, Han H, So M, Kang H, Kang S, Kim D, Lee YG, Shin D, Jeon TJ, Yun M

PubMed · Sep 30 2025
Amyloid PET/CT is essential for quantifying amyloid-beta (Aβ) deposition in Alzheimer's disease (AD), with the Centiloid (CL) scale standardizing measurements across imaging centers. However, MRI-based CL pipelines face challenges: high cost, contraindications, and patient burden. To address these challenges, we developed a deep learning-based CT parcellation pipeline calibrated to the standard CL scale using CT images from PET/CT scans and evaluated its performance relative to standard pipelines. A total of 306 participants (23 young controls [YCs] and 283 patients) underwent <sup>18</sup>F-florbetaben (FBB) PET/CT and MRI. Based on visual assessment, 207 patients were classified as Aβ-positive and 76 as Aβ-negative. PET images were processed using the CT parcellation pipeline and compared to the FreeSurfer (FS) and standard pipelines. Agreement was assessed via regression analyses. Effect size, variance, and ROC analyses were used to compare pipelines and determine the optimal CL threshold relative to visual Aβ assessment. The CT parcellation showed high concordance with FS and provided reliable CL quantification (R² = 0.99). Both pipelines demonstrated similar variance in YCs and similar effect sizes between YCs and ADCI. ROC analyses confirmed comparable accuracy and similar CL thresholds, supporting CT parcellation as a viable MRI-free alternative. Our findings indicate that the CT parcellation pipeline achieves accuracy similar to FS in CL quantification, demonstrating its reliability as an MRI-free alternative. In PET/CT, CT and PET are acquired sequentially within the same session on a shared bed and headrest, which maintains consistent positioning and adequate spatial alignment, reducing registration errors and supporting more reliable and precise quantification.
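The Centiloid scale mentioned above is, by construction, a linear rescaling of a tracer-specific SUVr so that the young-control mean maps to 0 CL and the typical-AD mean maps to 100 CL. A minimal sketch of that conversion (the anchor SUVr values below are illustrative placeholders, not the published florbetaben calibration constants):

```python
def suvr_to_centiloid(suvr, suvr_yc_mean, suvr_ad100_mean):
    """Map an SUVr value onto the Centiloid scale.

    The CL scale is anchored so that the young-control mean maps to 0
    and the typical-AD mean maps to 100.
    """
    return 100.0 * (suvr - suvr_yc_mean) / (suvr_ad100_mean - suvr_yc_mean)

# Illustrative anchors (NOT the published florbetaben calibration values).
YC_MEAN, AD100_MEAN = 1.00, 1.60

cl_negative = suvr_to_centiloid(1.00, YC_MEAN, AD100_MEAN)  # young-control mean -> 0 CL
cl_positive = suvr_to_centiloid(1.45, YC_MEAN, AD100_MEAN)  # lands near 75 CL
```

Each pipeline (MRI-based, FS, or the CT parcellation evaluated here) supplies its own calibrated anchor values for this mapping.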

Coupé P, Mansencal B, Morandat F, Morell-Ortega S, Villain N, Manjón JV, Planche V

PubMed · Sep 30 2025
Quantification of amyloid plaques (A), neurofibrillary tangles (T<sub>2</sub>), and neurodegeneration (N) using PET and MRI is critical for Alzheimer's disease (AD) diagnosis and prognosis. Existing pipelines face limitations regarding processing time, tracer variability handling, and multimodal integration. We developed petBrain, a novel end-to-end processing pipeline for amyloid-PET, tau-PET, and structural MRI. It leverages deep learning-based segmentation, standardized biomarker quantification (Centiloid, CenTauR, HAVAs), and simultaneous estimation of the A, T<sub>2</sub>, and N biomarkers. It is implemented as a web application, requiring neither local computational infrastructure nor specialized software knowledge. petBrain provides reliable, rapid quantification with results comparable to existing pipelines for A and T<sub>2</sub>, showing strong concordance with data processed in the ADNI database. A/T<sub>2</sub>/N staging and quantification by petBrain demonstrated good agreement with CSF/plasma biomarkers, clinical status, and cognitive performance. petBrain represents a powerful open platform for standardized AD biomarker analysis, facilitating clinical research applications.
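The simultaneous A/T/N estimation described above ultimately reduces each quantitative marker to a positivity call against a cut-off. A schematic sketch of that staging step, with purely hypothetical thresholds (petBrain's actual cut-offs are defined against its own reference data):

```python
def atn_profile(centiloid, centaur, atrophy_index,
                a_thr=20.0, t_thr=1.5, n_thr=0.5):
    """Return an A/T/N positivity profile from three quantitative markers.

    The thresholds are placeholders for illustration only; real staging
    uses cut-offs validated against reference cohorts.
    """
    return {
        "A": centiloid >= a_thr,      # amyloid (Centiloid)
        "T": centaur >= t_thr,        # tau (CenTauR)
        "N": atrophy_index >= n_thr,  # neurodegeneration (e.g. atrophy-based)
    }

# Example: amyloid-positive, tau-positive, no neurodegeneration yet.
profile = atn_profile(centiloid=45.0, centaur=2.1, atrophy_index=0.3)
```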

Yang C, Jia Z, Gao W, Xu C, Zhang L, Li J

PubMed · Sep 30 2025
Patient satisfaction one year after distal radius fracture fixation is influenced by various factors, including the surgical approach, the patient's physical functioning, and psychological factors. Hence, a multimodal machine learning prediction model combining traditional rating scales and postoperative X-ray images was developed to predict patient satisfaction one year after surgery, supporting personalized clinical treatment. In this study, we reviewed 385 patients who underwent internal fixation with a palmar plate or external fixation bracket fixation in 2018-2020. After one year of postoperative follow-up, 169 patients completed the patient-rated wrist evaluation (PRWE), EuroQol-5D (EQ-5D), and forgotten joint score-12 (FJS-12) questionnaires and underwent X-ray imaging. The region of interest (ROI) of each postoperative X-ray was outlined using 3D Slicer, and the training and test sets were divided based on patient satisfaction. Python was used to extract 848 image features, and random forest embedding was used to reduce feature dimensionality. A machine learning model combining the patients' functional rating scales with the reduced X-ray image features was then built, with hyperparameter tuning performed via grid search during modeling. The stability of the Radiomics and Integrated models was first verified using five-fold cross-validation, and receiver operating characteristic curves, calibration curves, and decision curve analysis were then used to evaluate model performance on the training and test sets. Feature dimensionality reduction yielded 16 imaging features. The Radiomics model achieved accuracies of 0.831 and 0.784 on the training and test sets, respectively, while the Integrated model achieved 0.966 and 0.804. The corresponding area under the curve (AUC) values were 0.937 and 0.673 for the Radiomics model, and 0.997 and 0.823 for the Integrated model.
The calibration curves and decision curve analysis (DCA) of the Integrated model on the training and test sets showed more accurate prediction probabilities and greater clinical utility than those of the Radiomics model. A multimodal machine learning predictive model combining imaging and patient functional rating scales demonstrated optimal predictive performance for one-year postoperative satisfaction in patients with distal radius fractures, providing a basis for personalized postoperative patient management.
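The modeling step above, grid-search hyperparameter tuning of a classifier evaluated by ROC analysis, can be sketched on synthetic data; the feature count, parameter grid, and random forest classifier below are illustrative stand-ins, not the study's actual configuration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the 16 reduced imaging features plus scale scores.
X, y = make_classification(n_samples=169, n_features=16, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Grid search with 5-fold cross-validation, mirroring the study design.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, 5]},
    cv=5, scoring="roc_auc")
grid.fit(X_train, y_train)

# Held-out ROC analysis of the tuned model.
auc = roc_auc_score(y_test, grid.predict_proba(X_test)[:, 1])
```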

Huang Y, Yuan X, Xu L, Jian J, Gong C, Zhang Y, Zheng W

PubMed · Sep 30 2025
The precise contouring of gross tumor volume lymph nodes (GTVnd) is an essential step in clinical target volume delineation. This study proposes and evaluates a deep learning model for segmenting the GTVnd specifically in lung cancer, representing one of the pioneering investigations into automated GTVnd segmentation for this disease. Ninety computed tomography (CT) scans of patients with stage III-IV small cell lung cancer (SCLC) were collected, of which 75 were assembled into a training dataset and 15 into a testing dataset. A new segmentation model, ECENet, was constructed to enable automatic and accurate delineation of the GTVnd in lung cancer. The model integrates a contextual cue enhancement module, which enforces the consistency of the contextual cues encoded in the deepest feature, and an edge-guided feature enhancement decoder, which produces edge-aware and edge-preserving segmentation predictions. The model was quantitatively evaluated using the three-dimensional Dice similarity coefficient (3D DSC) and the 95th-percentile Hausdorff distance (95HD), and treatment plans derived from the auto-contoured GTVnd were compared with established clinical plans. ECENet achieved a mean 3D DSC of 0.72 ± 0.09 and a 95HD of 6.39 ± 4.59 mm, a significant improvement over UNet (DSC 0.46 ± 0.19; 95HD 12.24 ± 13.36 mm) and nnUNet (DSC 0.52 ± 0.18; 95HD 9.92 ± 6.49 mm). Its performance was intermediate between that of mid-level physicians (DSC 0.81 ± 0.06) and junior physicians (DSC 0.68 ± 0.10).
The dosimetric analysis demonstrated excellent agreement between predicted and clinical plans, with average relative deviations of < 0.17% for PTV D2/D50/D98, < 3.5% for lung V30/V20/V10/V5/Dmean, and < 6.1% for heart V40/V30/Dmean. Furthermore, the TCP (66.99% ± 0.55 vs. 66.88% ± 0.45) and NTCP (3.13% ± 1.33 vs. 3.25% ± 1.42) analyses revealed strong concordance between predicted and clinical outcomes, confirming the clinical applicability of the proposed method. The proposed model achieves automatic delineation of the GTVnd in the thoracic region in lung cancer and shows clear advantages, making it a potential choice for automatic GTVnd delineation, particularly for young radiation oncologists.
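The two evaluation metrics used above, the 3D Dice similarity coefficient and the 95th-percentile Hausdorff distance, can be computed directly from binary masks. A minimal sketch (brute-force distances, suitable only for small volumes, not full CT scans):

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice_3d(a, b):
    """3D Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance (in voxel units).

    Brute-force over all foreground voxels; production code would use
    surface voxels and spacing-aware distance transforms instead.
    """
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = cdist(pa, pb)
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))

# Two overlapping 4x4x4 cubes in a small volume.
m1 = np.zeros((10, 10, 10), bool); m1[2:6, 2:6, 2:6] = True
m2 = np.zeros((10, 10, 10), bool); m2[3:7, 3:7, 3:7] = True
dsc = dice_3d(m1, m2)   # 2*27 / (64+64)
```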

Cui C, Cao J, Li Y, Jia B, Ma N, Li X, Liang M, Hou M, Zhang Y, Wang H, Wu Z

PubMed · Sep 30 2025
This study aimed to evaluate diffuse large B-cell lymphoma (DLBCL) patients with refractory/relapsed disease and to characterize the heterogeneity of DLBCL using patient-level radiomics analysis based on <sup>18</sup>F-FDG PET/CT. A total of 132 patients diagnosed with DLBCL who underwent <sup>18</sup>F-FDG PET/CT before treatment were selected for the final study. Patient-level volumes of interest (VOIs) were extracted from the PET/CT images, and 328 radiomics features were subsequently extracted. Eight radiomics features were selected using the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm to calculate a radiomics score (rad-score). Additionally, 64 candidate machine learning (ML) classifiers were generated based on 8 distinct supervised learning algorithms. The combined model integrating rad-scores, clinical features, and standard PET parameters demonstrated excellent performance; specifically, the ML models based on Naive Bayes achieved the highest predictive value (AUC = 0.73). The patient-level radiomics features were subjected to unsupervised non-negative matrix factorization (NMF) clustering analysis, identifying 3 radiomics subtypes. Cluster 1 exhibited a substantially higher prevalence of refractory/relapsed DLBCL than Clusters 2 and 3 (P < 0.05). Moreover, Cluster 1 showed a significantly higher frequency of advanced Ann Arbor stage, high international prognostic index, and bulky disease (all P < 0.05). In conclusion, radiomics scores and radiomics subtypes derived from patient-level data offer significant predictive value and phenotypic information for patients with refractory/relapsed DLBCL.
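The rad-score construction described above, LASSO feature selection followed by a linear combination of the surviving features, can be sketched on synthetic data. Here L1-penalized logistic regression plays the role of the LASSO selector, and the feature matrix is a random stand-in for the 328 extracted features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for 328 radiomics features from 132 patients.
X, y = make_classification(n_samples=132, n_features=328, n_informative=10,
                           random_state=0)
X = StandardScaler().fit_transform(X)

# L1 penalty drives most coefficients to exactly zero (LASSO-style selection);
# C controls sparsity (smaller C -> fewer surviving features).
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.2,
                           random_state=0).fit(X, y)
coef = lasso.coef_.ravel()
selected = np.flatnonzero(coef)

# Rad-score: linear combination of the selected features only.
rad_score = X[:, selected] @ coef[selected] + lasso.intercept_[0]
```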

Gurunathan P, Srinivasan PS, S R

PubMed · Sep 30 2025
Brain tumours (BTs) are among the most aggressive diseases, leading to very short life expectancy. Early and prompt treatment is therefore the key to enhancing patients' quality of life. Biomedical imaging permits non-invasive evaluation of disease based on visual assessment, supporting better prognostication and therapeutic planning. Numerous imaging techniques, such as computed tomography (CT) and magnetic resonance imaging (MRI), are employed for evaluating cancer in the brain. The detection, segmentation, and extraction of diseased tumour regions from biomedical images are a primary concern, but these are tiresome and time-consuming tasks performed by clinical specialists, and their outcomes depend heavily on the specialists' experience. The use of computer-aided technologies is therefore essential to overcome these limitations. Recently, artificial intelligence (AI) models have proven very effective in enhancing the performance of medical image diagnosis. This paper proposes an Enhanced Brain Tumour Segmentation through Biomedical Imaging and Feature Model Fusion with Bonobo Optimiser (EBTS-BIFMFBO) model. The main intention of the EBTS-BIFMFBO model is to enhance the segmentation and classification of BTs using advanced models. Initially, the EBTS-BIFMFBO technique applies bilateral filter (BF)-based noise elimination and CLAHE-based contrast enhancement. The proposed model then performs segmentation with the DeepLabV3+ model to identify tumour regions for accurate diagnosis. Fusion models, namely InceptionResNetV2, MobileNet, and DenseNet201, are employed for feature extraction, and the convolutional sparse autoencoder (CSAE) method is implemented for BT classification. Finally, the hyperparameters of the CSAE are selected by the bonobo optimizer (BO) method.
Extensive experiments were conducted on the Figshare BT dataset to highlight the performance of the EBTS-BIFMFBO approach. In comparison with existing models, the EBTS-BIFMFBO approach achieved a superior accuracy of 99.16%.
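The preprocessing stage described above (bilateral-filter denoising followed by CLAHE contrast enhancement) can be approximated with simpler stand-ins to show the shape of the pipeline; the sketch below uses a Gaussian filter in place of the edge-preserving bilateral filter, and global histogram equalization in place of tile-based CLAHE:

```python
import numpy as np
from scipy import ndimage

def equalize_hist(img, n_bins=256):
    """Global histogram equalization over [0, 1] intensities.

    A simplified stand-in for CLAHE, which additionally operates on
    local tiles with clip limiting.
    """
    hist, edges = np.histogram(img.ravel(), bins=n_bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)

# Synthetic low-contrast, noisy image as a stand-in for an MRI slice.
rng = np.random.default_rng(0)
img = np.clip(rng.normal(0.4, 0.1, (64, 64)), 0.0, 1.0)

# Denoise (Gaussian here; the paper uses an edge-preserving bilateral
# filter), then stretch the contrast.
denoised = ndimage.gaussian_filter(img, sigma=1.0)
enhanced = equalize_hist(denoised)
```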

Perron J, Krak S, Booth S, Zhang D, Ko JH

PubMed · Sep 30 2025
Many Parkinson's disease (PD) patients develop a treatment-related complication called levodopa-induced dyskinesia (LID). Preventing the onset of LID is crucial to the management of PD, but the reasons why some patients develop LID are unclear. The ability to prognosticate predisposition to LID would be valuable for investigating mitigation strategies. Thirty rats received 6-hydroxydopamine to induce Parkinsonism-like behaviors before treatment with levodopa (2 mg/kg) daily for 22 days. Fourteen developed LID-like behaviors. Fluorodeoxyglucose PET, T<sub>2</sub>-weighted MRI, and cerebral perfusion imaging were collected before treatment. Support vector machines were trained to classify prospective LID vs. non-LID animals from treatment-naïve baseline imaging. Volumetric perfusion imaging performed best overall, with an area under the curve of 86.16%, accuracy of 86.67%, sensitivity of 92.86%, and specificity of 81.25% for classifying LID vs. non-LID animals in leave-one-out cross-validation. We have demonstrated proof-of-concept for imaging-based classification of susceptibility to LID in a Parkinsonian rat model using perfusion imaging and a machine learning model.
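The evaluation protocol above, a support vector machine scored by leave-one-out cross-validation, can be sketched with scikit-learn; the 30-sample synthetic dataset below mirrors the cohort size (14 LID vs. 16 non-LID) but is otherwise a random stand-in for the imaging features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 30 animals, imaging-derived features, ~16/14 class split.
X, y = make_classification(n_samples=30, n_features=20, n_informative=5,
                           weights=[16 / 30], random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))

# Leave-one-out: each animal is held out once and predicted from the rest.
scores = cross_val_predict(clf, X, y, cv=LeaveOneOut(),
                           method="predict_proba")[:, 1]
auc = roc_auc_score(y, scores)
acc = accuracy_score(y, (scores > 0.5).astype(int))
```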

Stambollxhiu E, Freißmuth L, Moser LJ, Adolf R, Will A, Hendrich E, Bressem K, Hadamitzky M

PubMed · Sep 30 2025
This study aims to develop and assess an optimized three-dimensional convolutional neural network (3D CNN) for predicting major cardiac events from coronary computed tomography angiography (CCTA) images in patients with suspected coronary artery disease. Patients undergoing CCTA for suspected coronary artery disease (CAD) were retrospectively included in this single-center study and split into training and test sets. The endpoint was defined as a composite of all-cause death, myocardial infarction, unstable angina, or revascularization. Cardiovascular risk assessment relied on the Morise score and the extent of CAD (eoCAD). An optimized 3D CNN mimicking the DenseNet architecture was trained on CCTA images to predict the clinical endpoint; the images were not annotated for the presence of coronary plaque. A total of 5562 patients were assigned to the training group (66.4% male, median age 61.1 ± 11.2 years) and 714 to the test group (69.3% male, 61.5 ± 11.4 years). Over a 7.2-year follow-up, the composite endpoint occurred in 760 training-group and 83 test-group patients. In the test cohort, the CNN achieved an AUC of 0.872 ± 0.020 for predicting the composite endpoint. Predictive performance improved in a stepwise manner: from an AUC of 0.652 ± 0.031 using the Morise score alone, to 0.901 ± 0.016 when adding eoCAD, and finally to 0.920 ± 0.015 when combining the Morise score, eoCAD, and the CNN (p < 0.001 and p = 0.012, respectively). Deep learning-based analysis of CCTA images improves prognostic risk stratification when combined with clinical and imaging risk factors in patients with suspected CAD.
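The stepwise gain reported above (clinical score alone, then adding eoCAD, then adding the CNN output) corresponds to comparing AUCs of nested models built on the same cohort. A schematic illustration on synthetic data, where the three predictors are random stand-ins with increasing signal strength:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 714
y = rng.binomial(1, 0.12, n)  # event indicator (~12% event rate)

# Synthetic stand-ins for a clinical score, an imaging score, and a CNN
# output, each associated with the outcome to a different degree.
morise = y * 0.8 + rng.normal(0, 1.0, n)
eocad  = y * 1.2 + rng.normal(0, 1.0, n)
cnn    = y * 1.5 + rng.normal(0, 1.0, n)

def auc_of(*features):
    """In-sample AUC of a logistic model combining the given predictors."""
    X = np.column_stack(features)
    p = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
    return roc_auc_score(y, p)

auc_clinical = auc_of(morise)               # score alone
auc_combined = auc_of(morise, eocad, cnn)   # nested model with all predictors
```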

Yao D, Yan C, Du W, Zhang J, Wang Z, Zhang S, Yang M, Dai S

PubMed · Sep 30 2025
Cardiac motion artifacts frequently degrade the quality and interpretability of coronary computed tomography angiography (CCTA) images, making it difficult for radiologists to accurately identify and evaluate the details of the coronary vessels. In this paper, a deep learning-based approach for coronary artery motion compensation, a temporal-weighted motion correction network (TW-MoCoNet), is proposed. Firstly, the motion data required for TW-MoCoNet training were generated using a motion artifact simulation method based on the original artifact-free CCTA images. Secondly, TW-MoCoNet, consisting of a temporal weighting correction module and a differentiable spatial transformer module, was trained using these generated image pairs. Finally, the proposed method was evaluated on 67 clinical datasets with objective metrics including peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), fold-overlap ratio (FOR), low-intensity region score (LIRS), and motion artifact score (MAS). Additionally, subjective image quality was evaluated using a 4-point Likert scale to assess visual improvements. The experimental results demonstrated substantial improvement in both objective and subjective image quality after motion correction. The proportion of segments with moderate artifacts (scored 2 points) showed a notable decrease of 80.2% (from 26.37% to 5.22%), and the proportion of artifact-free segments (scored 4 points) reached 50.0%, which is of great clinical significance. In conclusion, the proposed deep learning-based motion correction method can effectively reduce motion artifacts, enhance image clarity, and improve clinical interpretability, effectively assisting doctors in accurately identifying and evaluating the details of coronary vessels.
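Two of the objective metrics above, PSNR and SSIM, are straightforward to compute from image pairs; the sketch below implements PSNR exactly and a single-window SSIM (the standard SSIM averages the same statistic over local sliding windows):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    mse = np.mean((ref - test) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM over the whole image.

    The usual SSIM averages this statistic over local sliding windows;
    this global variant keeps the sketch short.
    """
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cov + c2)) /
            ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

# Synthetic "clean vs. motion-degraded" pair as a stand-in for CCTA slices.
rng = np.random.default_rng(0)
clean = rng.random((32, 32))
noisy = np.clip(clean + rng.normal(0, 0.05, clean.shape), 0, 1)

psnr_db = psnr(clean, noisy)
ssim_val = ssim_global(clean, noisy)
```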