
Coupé P, Mansencal B, Morandat F, Morell-Ortega S, Villain N, Manjón JV, Planche V

PubMed · Sep 30, 2025
Quantification of amyloid plaques (A), neurofibrillary tangles (T2), and neurodegeneration (N) using PET and MRI is critical for Alzheimer's disease (AD) diagnosis and prognosis. Existing pipelines face limitations in processing time, handling of tracer variability, and multimodal integration. We developed petBrain, a novel end-to-end processing pipeline for amyloid-PET, tau-PET, and structural MRI. It leverages deep learning-based segmentation, standardized biomarker quantification (Centiloid, CenTauR, HAVAs), and simultaneous estimation of the A, T2, and N biomarkers. It is implemented as a web application, requiring no local computational infrastructure or prior software expertise. petBrain provides reliable, rapid quantification, with A and T2 results comparable to existing pipelines and strong concordance with data processed in the ADNI database. A/T2/N staging and quantification by petBrain showed good agreement with CSF/plasma biomarkers, clinical status, and cognitive performance. petBrain represents a powerful open platform for standardized AD biomarker analysis, facilitating clinical research applications.
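For context, the Centiloid scale mentioned above linearly rescales a tracer-specific amyloid SUVr onto a common 0-100 axis anchored by a young-control cohort (0 CL) and a typical-AD cohort (100 CL). A minimal sketch of that rescaling; the anchor values below are hypothetical placeholders, not petBrain's calibration, since anchors are tracer- and pipeline-specific:

```python
def centiloid(suvr: float, suvr_young_ctrl: float, suvr_typical_ad: float) -> float:
    """Linearly rescale a tracer-specific amyloid SUVr to the Centiloid scale.

    Anchors: 0 CL = mean SUVr of young controls, 100 CL = mean SUVr of a
    typical-AD cohort. Anchor values vary by tracer and processing pipeline.
    """
    return 100.0 * (suvr - suvr_young_ctrl) / (suvr_typical_ad - suvr_young_ctrl)

# Example with hypothetical anchor values:
print(centiloid(suvr=1.45, suvr_young_ctrl=1.05, suvr_typical_ad=2.05))  # -> 40.0
```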

Yang C, Jia Z, Gao W, Xu C, Zhang L, Li J

PubMed · Sep 30, 2025
Patient satisfaction one year after distal radius fracture fixation is influenced by factors such as the surgical approach, the patient's physical functioning, and psychological factors. We therefore developed a multimodal machine learning model combining traditional rating scales with postoperative X-ray images to predict satisfaction one year after surgery and support personalized clinical treatment. We reviewed 385 patients who underwent internal fixation with a palmar plate or external fixation with a bracket between 2018 and 2020. After one year of postoperative follow-up, 169 patients completed the Patient-Rated Wrist Evaluation (PRWE), EuroQol-5D (EQ-5D), and Forgotten Joint Score-12 (FJS-12) questionnaires and underwent X-ray imaging. The region of interest (ROI) in each postoperative X-ray was outlined in 3D Slicer, and the data were divided into training and test sets based on patient satisfaction. Python was used to extract 848 image features, and random forest embedding was used to reduce feature dimensionality. A machine learning model combining the functional rating scales with the reduced X-ray image features was then built, with hyperparameters tuned by grid search during modeling. The stability of the Radiomics and Integrated models was first verified with five-fold cross-validation; receiver operating characteristic curves, calibration curves, and decision curve analysis (DCA) were then used to evaluate performance on the training and test sets. Feature dimensionality reduction yielded 16 imaging features. The Radiomics model achieved accuracies of 0.831 and 0.784 on the training and test sets, respectively, versus 0.966 and 0.804 for the Integrated model. The corresponding area under the curve (AUC) values were 0.937 and 0.673 for the Radiomics model and 0.997 and 0.823 for the Integrated model. The calibration curves and DCA showed that the Integrated model yielded more accurate predicted probabilities and greater clinical utility than the Radiomics model. A multimodal machine learning model combining imaging with patient functional rating scales demonstrated strong predictive performance for one-year postoperative satisfaction in patients with distal radius fractures, providing a basis for personalized postoperative management.
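The workflow above (tree-based feature reduction, grid-search tuning, five-fold cross-validation) maps naturally onto scikit-learn. A minimal sketch under stated assumptions: the arrays are synthetic stand-ins for the study's 848 image features plus scale scores, and the "random forest embedding" step is approximated here by random-forest importance-based selection via SelectFromModel, which is one common reading, not necessarily the authors' exact method:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import Pipeline

# Hypothetical inputs: 848 radiomics features per patient plus 3 scale scores.
rng = np.random.default_rng(0)
X = rng.normal(size=(169, 848 + 3))   # image features + PRWE/EQ-5D/FJS-12
y = rng.integers(0, 2, size=169)      # 1 = satisfied at one year (synthetic)

pipe = Pipeline([
    # Keep the top 16 features by forest importance (mirrors the 16 retained).
    ("select", SelectFromModel(
        RandomForestClassifier(n_estimators=200, random_state=0),
        max_features=16, threshold=-np.inf)),
    ("clf", RandomForestClassifier(random_state=0)),
])

grid = GridSearchCV(
    pipe,
    param_grid={"clf__n_estimators": [100, 300], "clf__max_depth": [3, 5, None]},
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    scoring="roc_auc",
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```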

Huang Y, Yuan X, Xu L, Jian J, Gong C, Zhang Y, Zheng W

PubMed · Sep 30, 2025
Precise contouring of gross tumor volume lymph nodes (GTVnd) is an essential step in clinical target volume delineation. This study proposes and evaluates a deep learning model for segmenting the GTVnd in lung cancer, one of the first investigations into automated GTVnd segmentation for this disease. Ninety computed tomography (CT) scans of patients with stage III-IV small cell lung cancer (SCLC) were collected; 75 patients were assembled into a training dataset and 15 into a testing dataset. A new segmentation model, ECENet, was constructed for automatic and accurate delineation of the GTVnd in lung cancer. It integrates a contextual cue enhancement module, which enforces the consistency of the contextual cues encoded in the deepest features, and an edge-guided feature enhancement decoder, which produces edge-aware and edge-preserving segmentation predictions. The model was quantitatively evaluated using the three-dimensional Dice Similarity Coefficient (3D DSC) and the 95th-percentile Hausdorff Distance (95HD), and treatment plans derived from the auto-contoured GTVnd were compared against established clinical plans. ECENet achieved a mean 3D DSC of 0.72 ± 0.09 and a 95HD of 6.39 ± 4.59 mm, a significant improvement over UNet (DSC 0.46 ± 0.19; 95HD 12.24 ± 13.36 mm) and nnUNet (DSC 0.52 ± 0.18; 95HD 9.92 ± 6.49 mm). Its performance fell between that of mid-level physicians (DSC 0.81 ± 0.06) and junior physicians (DSC 0.68 ± 0.10). Dosimetric analysis demonstrated excellent agreement between predicted and clinical plans, with average relative deviations of < 0.17% for PTV D2/D50/D98, < 3.5% for lung V30/V20/V10/V5/Dmean, and < 6.1% for heart V40/V30/Dmean. Furthermore, TCP (66.99% ± 0.55 vs. 66.88% ± 0.45) and NTCP (3.13% ± 1.33 vs. 3.25% ± 1.42) analyses revealed strong concordance between predicted and clinical outcomes, confirming the clinical applicability of the proposed method. The proposed model achieves automatic delineation of the GTVnd in the thoracic region in lung cancer and shows clear advantages, making it a potential choice for automatic GTVnd delineation, particularly for young radiation oncologists.
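The two evaluation metrics above are standard in segmentation work: 3D DSC measures volumetric overlap, and 95HD measures a robustified worst-case surface disagreement. A minimal sketch for binary masks, assuming distance-transform-based surface distances and taking the 95th percentile of the pooled directed distances (one common definition among several):

```python
import numpy as np
from scipy import ndimage

def dice_3d(pred: np.ndarray, gt: np.ndarray) -> float:
    """3D Dice Similarity Coefficient for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance between mask surfaces (mm)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Distance from every voxel to the nearest foreground voxel of each mask.
    d_to_gt = ndimage.distance_transform_edt(~gt, sampling=spacing)
    d_to_pred = ndimage.distance_transform_edt(~pred, sampling=spacing)
    # Surface voxels: foreground minus its binary erosion.
    surf_pred = pred & ~ndimage.binary_erosion(pred)
    surf_gt = gt & ~ndimage.binary_erosion(gt)
    dists = np.concatenate([d_to_gt[surf_pred], d_to_pred[surf_gt]])
    return float(np.percentile(dists, 95))
```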

Cui C, Cao J, Li Y, Jia B, Ma N, Li X, Liang M, Hou M, Zhang Y, Wang H, Wu Z

PubMed · Sep 30, 2025
This study aimed to evaluate diffuse large B-cell lymphoma (DLBCL) patients with refractory/relapsed disease and to characterize the heterogeneity of DLBCL using patient-level radiomics analysis based on ¹⁸F-FDG PET/CT. A total of 132 patients diagnosed with DLBCL who underwent ¹⁸F-FDG PET/CT before treatment were included. Patient-level volumes of interest (VOIs) were extracted from the PET/CT images, and 328 radiomics features were computed from them. Eight radiomics features were selected using the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm to calculate a radiomics score (rad-score), and 64 candidate ML classifiers were generated from 8 distinct supervised learning algorithms. The combined model integrating rad-scores, clinical features, and standard PET parameters demonstrated excellent performance; among the classifiers, models based on Naive Bayes achieved the highest predictive value (AUC = 0.73). The patient-level radiomics features were also subjected to unsupervised non-negative matrix factorization (NMF) clustering, identifying 3 radiomics subtypes. Cluster 1 exhibited a substantially higher prevalence of refractory/relapsed DLBCL than Clusters 2 and 3 (P < 0.05), as well as a significantly higher frequency of advanced Ann Arbor stage, high International Prognostic Index, and bulky disease (all P < 0.05). In conclusion, radiomics scores and radiomics subtypes derived from patient-level data offer significant predictive value and phenotypic information for patients with refractory/relapsed DLBCL.
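The rad-score construction described above is conventionally the LASSO-weighted linear combination of the selected features, and the subtype step factorizes the feature matrix with NMF and assigns each patient to the dominant component. A minimal sketch with synthetic data; the feature matrix, penalty path, and cluster assignment rule are illustrative assumptions, not the authors' exact settings:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
X = rng.normal(size=(132, 328))    # patient-level radiomics features (synthetic)
y = rng.integers(0, 2, size=132)   # 1 = refractory/relapsed (synthetic)

# LASSO selects a sparse subset; the rad-score is the fitted linear predictor.
Xs = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5, random_state=1).fit(Xs, y)
selected = np.flatnonzero(lasso.coef_)
rad_score = Xs @ lasso.coef_ + lasso.intercept_
print(f"{selected.size} features selected")

# NMF requires non-negative input; rescale, factorize into 3 components,
# and assign each patient to the component with the largest loading.
Xn = MinMaxScaler().fit_transform(X)
W = NMF(n_components=3, init="nndsvda", random_state=1, max_iter=500).fit_transform(Xn)
subtype = W.argmax(axis=1)         # radiomics subtype per patient
```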

Gurunathan P, Srinivasan PS, S R

PubMed · Sep 30, 2025
Brain tumours (BTs) are among the most aggressive diseases and lead to very short life expectancy, so early and prompt treatment is key to improving patients' quality of life. Biomedical imaging permits non-invasive evaluation of disease through visual assessment, supporting better outcome expectations and therapeutic planning. Numerous imaging techniques, such as computed tomography (CT) and magnetic resonance imaging (MRI), are employed to evaluate cancer in the brain. Detecting, segmenting, and extracting diseased tumour regions from biomedical images is a primary concern, but it is a tiresome and time-consuming task performed by clinical specialists, and its outcome depends solely on their experience. Computer-aided technologies are therefore essential to overcoming these limitations. Recently, artificial intelligence (AI) models have proven highly effective at improving medical image diagnosis. This paper proposes an Enhanced Brain Tumour Segmentation through Biomedical Imaging and Feature Model Fusion with Bonobo Optimiser (EBTS-BIFMFBO) model, whose main aim is to improve the segmentation and classification of BTs using advanced models. The EBTS-BIFMFBO technique first applies bilateral filter (BF)-based noise elimination and CLAHE-based contrast enhancement. It then segments tumour regions with the DeepLabV3+ model for accurate diagnosis and employs the fusion models InceptionResNetV2, MobileNet, and DenseNet201 for feature extraction. A convolutional sparse autoencoder (CSAE) performs BT classification, with its hyperparameters selected by the bonobo optimiser (BO) method. Extensive experiments on the Figshare BT dataset highlight the performance of the EBTS-BIFMFBO approach, which achieved a superior accuracy of 99.16% compared with existing models.
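The two preprocessing steps named above, bilateral filtering for edge-preserving denoising and CLAHE for local contrast enhancement, are available directly in OpenCV. A minimal sketch for a grayscale slice; the filename and filter/clip parameters are illustrative, not the paper's settings:

```python
import cv2

img = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Bilateral filter: smooths noise while preserving tumour boundaries
# (diameter and sigma values are illustrative).
denoised = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)

# CLAHE: contrast-limited adaptive histogram equalization on local tiles.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(denoised)

cv2.imwrite("mri_slice_preprocessed.png", enhanced)
```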

Perron J, Krak S, Booth S, Zhang D, Ko JH

PubMed · Sep 30, 2025
Many Parkinson's disease (PD) patients develop a treatment-related complication called levodopa-induced dyskinesia (LID). Preventing the onset of LID is crucial to the management of PD, but the reasons why some patients develop it are unclear, so the ability to prognosticate predisposition to LID would be valuable for investigating mitigation strategies. Thirty rats received 6-hydroxydopamine to induce Parkinsonism-like behaviors before being treated with levodopa (2 mg/kg) daily for 22 days; fourteen developed LID-like behaviors. Fluorodeoxyglucose PET, T2-weighted MRI, and cerebral perfusion imaging were collected before treatment. Support vector machines were trained to classify prospective LID vs. non-LID animals from treatment-naïve baseline imaging. Volumetric perfusion imaging performed best overall, with 86.16% area under the curve, 86.67% accuracy, 92.86% sensitivity, and 81.25% specificity for classifying LID vs. non-LID animals in leave-one-out cross-validation. We have demonstrated proof-of-concept for imaging-based classification of susceptibility to LID in a Parkinsonian rat model using perfusion-based imaging and a machine learning model.
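The evaluation protocol above, an SVM scored with leave-one-out cross-validation and summarized by AUC, accuracy, sensitivity, and specificity, can be sketched in scikit-learn as follows. The arrays are synthetic stand-ins for the baseline imaging features, and the linear kernel is an assumption:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 50))    # baseline perfusion features per rat (synthetic)
y = rng.integers(0, 2, size=30)  # 1 = later developed LID-like behavior (synthetic)

model = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
proba = cross_val_predict(model, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
pred = (proba >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print(f"AUC={roc_auc_score(y, proba):.3f} "
      f"acc={(tp + tn) / len(y):.3f} "
      f"sens={tp / (tp + fn):.3f} spec={tn / (tn + fp):.3f}")
```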

Stambollxhiu E, Freißmuth L, Moser LJ, Adolf R, Will A, Hendrich E, Bressem K, Hadamitzky M

PubMed · Sep 30, 2025
This study aimed to develop and assess an optimized three-dimensional convolutional neural network (3D CNN) for predicting major cardiac events from coronary computed tomography angiography (CCTA) images in patients with suspected coronary artery disease (CAD). Patients undergoing CCTA for suspected CAD were retrospectively included in this single-center study and split into training and test sets. The endpoint was a composite of all-cause death, myocardial infarction, unstable angina, or revascularization. Cardiovascular risk assessment relied on the Morise score and the extent of CAD (eoCAD). An optimized 3D CNN mimicking the DenseNet architecture was trained on CCTA images to predict the clinical endpoint; the data were not annotated for the presence of coronary plaque. A total of 5562 patients were assigned to the training group (66.4% male; age 61.1 ± 11.2 years) and 714 to the test group (69.3% male; age 61.5 ± 11.4 years). Over a 7.2-year follow-up, the composite endpoint occurred in 760 training-group and 83 test-group patients. In the test cohort, the CNN achieved an AUC of 0.872 ± 0.020 for predicting the composite endpoint. Predictive performance improved stepwise: from an AUC of 0.652 ± 0.031 with the Morise score alone, to 0.901 ± 0.016 after adding eoCAD, and to 0.920 ± 0.015 after combining the Morise score, eoCAD, and the CNN (p < 0.001 and p = 0.012, respectively). Deep learning-based analysis of CCTA images improves prognostic risk stratification when combined with clinical and imaging risk factors in patients with suspected CAD.
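The stepwise comparison above is, in effect, a series of nested risk models over the same endpoint, each scored by AUC. A minimal sketch of that evaluation pattern, with synthetic arrays standing in for the study variables; combining predictors via logistic regression is an assumption, not the authors' stated method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 714                              # test-set size in the study
event = rng.integers(0, 2, size=n)   # composite endpoint (synthetic)
morise = rng.normal(size=(n, 1))     # clinical risk score (synthetic)
eocad = rng.normal(size=(n, 1))      # extent of CAD (synthetic)
cnn_out = rng.normal(size=(n, 1))    # 3D CNN risk output (synthetic)

# In practice, fit the combiner on the training set and score held-out data;
# fitting and scoring on the same arrays here is only for brevity.
for name, X in [("Morise", morise),
                ("Morise+eoCAD", np.hstack([morise, eocad])),
                ("Morise+eoCAD+CNN", np.hstack([morise, eocad, cnn_out]))]:
    proba = LogisticRegression().fit(X, event).predict_proba(X)[:, 1]
    print(f"{name}: AUC={roc_auc_score(event, proba):.3f}")
```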

Yao D, Yan C, Du W, Zhang J, Wang Z, Zhang S, Yang M, Dai S

PubMed · Sep 30, 2025
Cardiac motion artifacts frequently degrade the quality and interpretability of coronary computed tomography angiography (CCTA) images, making it difficult for radiologists to identify and evaluate coronary vessel details accurately. This paper proposes a deep learning-based approach to coronary artery motion compensation: a temporal-weighted motion correction network (TW-MoCoNet). First, the motion data required for TW-MoCoNet training were generated from the original artifact-free CCTA images using a motion artifact simulation method. Second, TW-MoCoNet, consisting of a temporal weighting correction module and a differentiable spatial transformer module, was trained on these generated image pairs. Finally, the method was evaluated on 67 clinical cases with objective metrics including peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), fold-overlap ratio (FOR), low-intensity region score (LIRS), and motion artifact score (MAS); subjective image quality was rated on a 4-point Likert scale. The experiments demonstrated substantial improvement in both objective and subjective image quality after motion correction: the proportion of segments with moderate artifacts (scored 2 points) decreased markedly by 80.2% (from 26.37% to 5.22%), and the proportion of artifact-free segments (scored 4 points) reached 50.0%, which is of clear clinical significance. In conclusion, the proposed deep learning-based motion correction method can effectively reduce motion artifacts, enhance image clarity, and improve clinical interpretability, thereby assisting doctors in accurately identifying and evaluating coronary vessel details.
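Two of the objective metrics above, PSNR and SSIM, are standard image-quality measures available in scikit-image (FOR, LIRS, and MAS are study-specific and not sketched here). A minimal example comparing a motion-corrected slice against an artifact-free reference; the arrays are synthetic placeholders:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(4)
reference = rng.random((512, 512)).astype(np.float32)   # artifact-free slice (synthetic)
corrected = np.clip(reference + 0.05 * rng.normal(size=(512, 512)),
                    0, 1).astype(np.float32)            # motion-corrected output

psnr = peak_signal_noise_ratio(reference, corrected, data_range=1.0)
ssim = structural_similarity(reference, corrected, data_range=1.0)
print(f"PSNR={psnr:.2f} dB, SSIM={ssim:.4f}")
```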

Bennett RD, Barrett T, Sushentsev N, Sanmugalingam N, Lee KL, Gnanapragasam VJ, Tse ZTH

PubMed · Sep 30, 2025
Currently proposed methods for prostate cancer screening are prohibitively expensive (owing to the high cost of imaging equipment such as magnetic resonance imaging and traditional ultrasound systems), inadequate in their detection rates, dependent on highly trained specialists, and/or invasive, causing patient discomfort. These limitations make population-wide screening for prostate cancer challenging. Machine learning applied to abdominal ultrasound scanning may alleviate some of these disadvantages: abdominal ultrasound is comparatively low cost and causes minimal patient discomfort, and machine learning can mitigate the high operator-dependent variability of ultrasound. In this study, a state-of-the-art machine learning model was compared with an expert radiologist and trainee radiology registrars of varying experience at estimating prostate volume from abdominal ultrasound images, a crucial step in detecting prostate cancer via prostate-specific antigen (PSA) density. The model calculated prostatic volume by marking out the dimensions of the prolate ellipsoid formula on two orthogonal images of the prostate acquired with abdominal ultrasound (scans that could be performed by minimally experienced operators in a primary care setting). While both the algorithm and the registrars showed high correlation with the expert ([Formula: see text]), the model outperformed the trainees in both accuracy (lowest average volume error of [Formula: see text]) and consistency (lowest IQR of [Formula: see text] and lowest average volume standard deviation of [Formula: see text]). These results are promising for the future development of an automated prostate cancer screening workflow using machine learning and abdominal ultrasound.
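The prolate ellipsoid formula referenced above estimates prostate volume from three orthogonal diameters as V = (pi/6) x length x width x height, which then feeds PSA density (PSA divided by volume). A minimal sketch; the measurements, PSA value, and the ~0.15 flagging threshold are illustrative:

```python
import math

def prolate_ellipsoid_volume_ml(length_cm: float, width_cm: float,
                                height_cm: float) -> float:
    """Prostate volume (mL) from three orthogonal diameters in cm (1 cm^3 = 1 mL)."""
    return math.pi / 6.0 * length_cm * width_cm * height_cm

def psa_density(psa_ng_ml: float, volume_ml: float) -> float:
    """PSA density (ng/mL per mL); values above ~0.15 are commonly flagged."""
    return psa_ng_ml / volume_ml

vol = prolate_ellipsoid_volume_ml(4.0, 5.0, 3.5)   # hypothetical measurements
print(f"volume={vol:.1f} mL, PSAd={psa_density(6.2, vol):.3f}")  # ~36.7 mL
```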

Almattar W, Anwar S, Al-Azani S, Khan FA

PubMed · Sep 30, 2025
Diabetic retinopathy (DR) is a leading cause of vision loss, necessitating early, accurate detection. Automated deep learning models show promise but struggle with the complexity of retinal images and limited labeled data. Owing to domain differences, traditional transfer learning from datasets like ImageNet often fails in medical imaging. Self-supervised learning (SSL) offers a solution by enabling models to learn directly from medical data, but its success depends on the backbone architecture: Convolutional Neural Networks (CNNs) focus on local features, which can be limiting. To address this, we propose the Multi-scale Self-Supervised Learning (MsSSL) model, combining Vision Transformers (ViTs) for global context with CNNs and a Feature Pyramid Network (FPN) for multi-scale feature extraction. These features are refined through a Deep Learner module, improving spatial resolution and capturing both high-level and fine-grained information. The MsSSL model significantly improves DR grading, outperforming traditional methods, and underscores the value of domain-specific pretraining and advanced model integration in medical imaging.
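The fusion pattern described above, multi-scale CNN features passed through an FPN and combined with transformer-based global context, can be sketched in PyTorch with torchvision building blocks. This is a structural illustration under stated assumptions, not the authors' MsSSL implementation: the tiny CNN trunk, layer sizes, and token pooling are all hypothetical:

```python
import torch
import torch.nn as nn
from collections import OrderedDict
from torchvision.ops import FeaturePyramidNetwork

class MsFusionSketch(nn.Module):
    """Sketch: fuse FPN multi-scale CNN features via a transformer encoder."""
    def __init__(self, num_classes: int = 5, dim: int = 256):
        super().__init__()
        # Tiny CNN trunk producing three feature scales (stand-in backbone).
        self.c1 = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU())
        self.c2 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
        self.c3 = nn.Sequential(nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU())
        self.fpn = FeaturePyramidNetwork([64, 128, 256], dim)
        # Transformer encoder over pooled pyramid levels supplies global context.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)  # e.g., 5 DR severity grades

    def forward(self, x):
        f1 = self.c1(x); f2 = self.c2(f1); f3 = self.c3(f2)
        feats = self.fpn(OrderedDict([("p1", f1), ("p2", f2), ("p3", f3)]))
        # One token per pyramid level via global average pooling.
        tokens = torch.stack([f.mean(dim=(2, 3)) for f in feats.values()], dim=1)
        fused = self.encoder(tokens).mean(dim=1)  # aggregate global context
        return self.head(fused)

logits = MsFusionSketch()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 5])
```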