
Bizzozero S, Bassani T, Sconfienza LM, Messina C, Bonato M, Inzaghi C, Marmondi F, Cinque P, Banfi G, Borghi S

PubMed | Jul 7, 2025
Aging alters musculoskeletal structure and function, affecting muscle mass, composition, and strength, and increasing the risk of falls and loss of independence in older adults. This study assessed the cross-sectional area (CSA) and fat infiltration (FI) of six thigh muscles using a validated deep learning model. Gender differences and correlations between fat, muscle parameters, and age were also analyzed. We retrospectively analyzed 141 participants (67 females, 74 males) aged 52-82 years. Participants underwent magnetic resonance imaging (MRI) of the right thigh and dual-energy x-ray absorptiometry to determine appendicular skeletal muscle mass index (ASMMI) and body fat percentage (FAT%). A deep learning-based application was developed to automate the segmentation of six thigh muscle groups. Model accuracy was evaluated using the intersection over union (IoU) metric, with average IoU values across muscle groups ranging from 0.84 to 0.99. Mean CSA was 10,766.9 mm² (females 8,892.6 mm², males 12,463.9 mm², p < 0.001). Mean FI was 14.92% (females 17.42%, males 12.62%, p < 0.001). Males showed larger CSA and lower FI in all thigh muscles compared to females. In females, positive correlations were identified between the FI of the posterior thigh muscle groups (biceps femoris, semimembranosus, and semitendinosus) and age (r or ρ = 0.35-0.48; p ≤ 0.004), while no significant correlations were observed between CSA, ASMMI, or FAT% and age. Deep learning accurately quantifies muscle CSA and FI, reducing analysis time and human error. Aging impacts muscle composition and distribution, and gender-specific assessments in older adults are needed. Efficient deep learning-based MRI segmentation to assess the composition of six thigh muscle groups in individuals over 50 years of age revealed gender differences in thigh muscle CSA and FI. These findings have potential clinical applications in assessing muscle quality, decline, and frailty.
Deep learning model enhanced MRI segmentation, providing high assessment accuracy. Significant gender differences in cross-sectional area and fat infiltration across all thigh muscles were observed. In females, fat infiltration of the posterior thigh muscles was positively correlated with age.
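The IoU metric used here to validate the segmentation model is simple to state: the overlap between the predicted and reference masks divided by their union. A minimal sketch, with illustrative names and binary masks flattened to lists:

```python
def iou(pred, target):
    """Intersection over Union for two binary masks, given as flat 0/1 lists."""
    inter = sum(p and t for p, t in zip(pred, target))   # pixels in both masks
    union = sum(p or t for p, t in zip(pred, target))    # pixels in either mask
    return inter / union if union else 1.0               # two empty masks agree perfectly
```

An average IoU of 0.84-0.99, as reported per muscle group, means the predicted masks overlap the reference masks almost completely.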

Talbi M, Nasraoui B, Alfaidi A

PubMed | Jul 7, 2025
Noise can enter a digital image during acquisition, transmission, and processing. Consequently, removing the noise from the digital image is required before further processing. This study aims to denoise noisy images, including Magnetic Resonance Images (<b>MRIs</b>), using our proposed image denoising approach, which is based on the Stationary Wavelet Transform (<b>SWT 2-D</b>) and the <b>2-D</b> Dual-Tree Discrete Wavelet Transform (<b>DWT</b>). The first step applies the 2-D Dual-Tree DWT to the noisy image to obtain noisy wavelet coefficients. The second step denoises each of these coefficients using an SWT 2-D based denoising technique. The denoised image is finally obtained by applying the inverse 2-D Dual-Tree <b>DWT</b> to the denoised coefficients from the second step. The proposed approach is evaluated by comparing it to four denoising techniques from the literature: thresholding in the <b>SWT-2D</b> domain, a deep neural network, soft thresholding in the 2-D Dual-Tree DWT domain, and the Non-Local Means filter. The proposed approach and these four techniques are applied to a number of noisy greyscale images and noisy Magnetic Resonance Images (MRIs), and the results are reported in terms of <b>PSNR</b> (Peak Signal-to-Noise Ratio), <b>SSIM</b> (Structural Similarity), <b>NMSE</b> (Normalized Mean Square Error), and <b>FSIM</b> (Feature Similarity). These results show that the proposed image denoising approach outperforms the other denoising techniques evaluated.
Compared with the four denoising techniques used for our evaluation, the proposed approach yields the highest values of <b>PSNR, SSIM</b>, and <b>FSIM</b> and the lowest values of <b>NMSE</b>. Moreover, at noise levels <b>σ = 10</b> or <b>σ = 20</b>, this approach removes the noise from the noisy images while introducing only slight distortions on the details of the original images. At <b>σ = 30</b> or <b>σ = 40</b>, it eliminates a great part of the noise but introduces some distortions on the original images. The performance of this approach is demonstrated by comparison with four image denoising techniques from the literature: thresholding in the SWT-2D domain, a deep neural network, soft thresholding in the <b>2-D</b> Dual-Tree <b>DWT</b> domain, and the Non-Local Means filter. All these techniques, including our approach, are applied to a number of noisy greyscale images and noisy <b>MRIs</b>, and the results are reported in terms of <b>PSNR</b> (Peak Signal-to-Noise Ratio), <b>SSIM</b> (Structural Similarity), <b>NMSE</b> (Normalized Mean Square Error), and <b>FSIM</b> (Feature Similarity). These results show that the proposed approach outperforms the four denoising techniques evaluated.
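Two of the evaluation metrics used above, PSNR and NMSE, follow directly from their definitions. A minimal sketch over flattened pixel lists, assuming an 8-bit peak value of 255:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference and a test image."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

def nmse(ref, test):
    """Normalized Mean Square Error: squared error normalized by reference energy."""
    num = sum((r - t) ** 2 for r, t in zip(ref, test))
    den = sum(r ** 2 for r in ref)
    return num / den
```

Higher PSNR and lower NMSE indicate a denoised image closer to the original, which is the direction of improvement the abstract reports.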

Gu M, Zou W, Chen H, He R, Zhao X, Jia N, Liu W, Wang P

PubMed | Jul 7, 2025
The purpose of this study was to develop a predictive model based on clinicoradiological and radiomics features from preoperative gadobenate-enhanced (Gd-BOPTA) magnetic resonance imaging (MRI), using a multilayer perceptron (MLP) deep learning algorithm, to predict vessels encapsulating tumor clusters (VETC) in hepatocellular carcinoma (HCC) patients. A total of 230 patients with histopathologically confirmed HCC who underwent preoperative Gd-BOPTA MRI before hepatectomy were retrospectively enrolled from three hospitals (144, 54, and 32 in the training, test, and validation sets, respectively). Univariate and multivariate logistic regression analyses were used to determine the independent clinicoradiological predictors significantly associated with VETC, which constituted the clinicoradiological model. Regions of interest (ROIs) included four modes: intratumoral (Tumor), peritumoral area ≤ 2 mm (Peri2mm), intratumoral + peritumoral area ≤ 2 mm (Tumor + Peri2mm), and intratumoral integrated with peritumoral ≤ 2 mm as a whole (TumorPeri2mm). A total of 7322 radiomics features were extracted for each of ROI(Tumor), ROI(Peri2mm), and ROI(TumorPeri2mm), and 14644 radiomics features for ROI(Tumor + Peri2mm). Least absolute shrinkage and selection operator (LASSO) and univariate logistic regression analysis were used to select the important features. Seven machine learning classifiers were each combined with the radiomics signatures selected from the four ROIs to constitute candidate models; their performance was compared across the three sets, and the optimal combination was selected as the radiomics model. A radiomics score (rad-score) was then generated and combined with the significant clinicoradiological predictors through multivariate logistic regression analysis to constitute the fusion model.
After comparing the performance of the three models using the area under the receiver operating characteristic curve (AUC), integrated discrimination index (IDI), and net reclassification index (NRI), the optimal predictive model for VETC was chosen. Arterial peritumoral enhancement and peritumoral hypointensity on the hepatobiliary phase (HBP) were independent risk factors for VETC and constituted the Radiology model, without any clinical variables. Arterial peritumoral enhancement was defined as enhancement outside the tumor boundary in the late arterial phase or early portal phase, in extensive contact with the tumor edge, which becomes isointense during the DP. The MLP deep learning algorithm integrating radiomics features selected from ROI TumorPeri2mm was the best combination and constituted the radiomics model (MLP model). An MLP score (MLP_score) was then calculated and combined with the two radiology features to compose the fusion model (Radiology MLP model), with AUCs of 0.871, 0.894, and 0.918 in the training, test, and validation sets. Compared with the two models above, the Radiology MLP model demonstrated a 33.4%-131.3% improvement in NRI and a 9.3%-50% improvement in IDI, showing better discrimination, calibration, and clinical usefulness in all three sets, and was selected as the optimal predictive model. In summary, we developed a fusion model (Radiology MLP model) that integrated radiology and radiomics features using an MLP deep learning algorithm to predict VETC in HCC patients, yielding incremental value over both the Radiology and MLP models.
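The rad-score and fusion steps follow a standard logistic-regression pattern: a weighted sum of selected radiomics features gives the score, which is then combined with the binary radiology predictors. A minimal sketch with hypothetical coefficients (the study's actual weights are not reported here):

```python
import math

def rad_score(features, weights, intercept=0.0):
    """LASSO-style rad-score: linear combination of selected radiomics features."""
    return intercept + sum(w * f for w, f in zip(weights, features))

def fusion_probability(rad, radiology_flags, coeffs, intercept=0.0):
    """Logistic fusion of the rad-score with binary radiology predictors (e.g.
    arterial peritumoral enhancement, peritumoral HBP hypointensity).
    `coeffs[0]` weights the rad-score; the rest weight the flags (all hypothetical)."""
    z = intercept + coeffs[0] * rad + sum(c * x for c, x in zip(coeffs[1:], radiology_flags))
    return 1.0 / (1.0 + math.exp(-z))
```

With all inputs at zero the logistic fusion returns 0.5, i.e. no evidence either way; real coefficients would come from the multivariate regression fit.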

Cui L, Xu M, Liu C, Liu T, Yan X, Zhang Y, Yang X

PubMed | Jul 7, 2025
Class imbalance is a dominant challenge in medical image segmentation when dealing with MRI images from highly imbalanced datasets. This study introduces a comprehensive, multifaceted approach to enhance the accuracy and reliability of segmentation models under such conditions. Our model integrates advanced data augmentation, algorithmic adjustments, and novel architectural features to address the class label distribution effectively. To cover multiple aspects of the training process, we customized the data augmentation technique for medical imaging across multiple dimensions and angles; this multi-dimensional augmentation helps reduce bias towards the majority classes. We implemented novel attention mechanisms, i.e., an Enhanced Attention Module (EAM) and spatial attention, which sharpen the model's focus on the most relevant features. Further, our architecture incorporates a dual decoder system and a Pooling Integration Layer (PIL) to capture accurate foreground and background details. We also introduce a hybrid loss function designed to handle class imbalance by guiding the training process. For experiments, we used multiple datasets, such as the Digital Database Thyroid Image (DDTI), the Breast Ultrasound Images Dataset (BUSI), and LiTS MICCAI 2017, to demonstrate the performance of the proposed network using key evaluation metrics, i.e., IoU, Dice coefficient, precision, and recall.
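A hybrid loss of the kind described, mixing an overlap term with a pixel-wise term to counter class imbalance, is commonly written as a convex combination of soft Dice loss and binary cross-entropy. A sketch under that assumption (the paper's exact formulation may differ):

```python
import math

def soft_dice_loss(probs, target, eps=1e-7):
    """1 - soft Dice; overlap-based, so robust to foreground/background imbalance."""
    inter = sum(p * t for p, t in zip(probs, target))
    return 1.0 - (2.0 * inter + eps) / (sum(probs) + sum(target) + eps)

def bce_loss(probs, target):
    """Mean binary cross-entropy over predicted probabilities."""
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for p, t in zip(probs, target)) / len(probs)

def hybrid_loss(probs, target, alpha=0.5):
    """Convex mix of Dice and cross-entropy, a common recipe for imbalanced masks."""
    return alpha * soft_dice_loss(probs, target) + (1 - alpha) * bce_loss(probs, target)
```

The Dice term rewards overlap regardless of how small the foreground class is, while the cross-entropy term keeps per-pixel gradients well behaved.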

Dai Y, Imami M, Hu R, Zhang C, Zhao L, Kargilis DC, Zhang H, Yu G, Liao WH, Jiao Z, Zhu C, Yang L, Bai HX

PubMed | Jul 7, 2025
The unrelenting progression of Parkinson's disease (PD) leads to severely impaired quality of life, with considerable variability in progression rates among patients. Identifying biomarkers of PD progression could improve clinical monitoring and management. Radiomics, which facilitates data extraction from imaging for use in machine learning models, offers a promising approach to this challenge. This study investigated the use of multi-modality imaging, combining conventional magnetic resonance imaging (MRI) and dopamine transporter single photon emission computed tomography (DAT-SPECT), to predict motor progression in PD. Motor progression was measured by changes in the Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS) motor subscale scores. Radiomic features were selected from the midbrain region on MRI and the caudate nucleus, putamen, and ventral striatum on DAT-SPECT. Patients were stratified into fast vs. slow progression based on the change in MDS-UPDRS at follow-up. Various feature selection methods and machine learning classifiers were evaluated for each modality, and the best-performing models were combined into an ensemble. On the internal test set, the ensemble model, which integrated clinical information, T1WI, T2WI, and DAT-SPECT, achieved a ROC AUC of 0.93 (95% CI: 0.80-1.00), PR AUC of 0.88 (95% CI: 0.61-1.00), accuracy of 0.85 (95% CI: 0.70-0.89), sensitivity of 0.72 (95% CI: 0.43-1.00), and specificity of 0.92 (95% CI: 0.77-1.00). On the external test set, the ensemble model outperformed single-modality models with a ROC AUC of 0.77 (95% CI: 0.53-0.93), PR AUC of 0.79 (95% CI: 0.56-0.95), accuracy of 0.68 (95% CI: 0.50-0.86), sensitivity of 0.53 (95% CI: 0.27-0.82), and specificity of 0.82 (95% CI: 0.55-1.00). In conclusion, this study developed an imaging-based model to identify baseline characteristics predictive of disease progression in PD patients.
The findings highlight the strength of using multiple imaging modalities and integrating imaging data with clinical information to enhance the prediction of motor progression in PD.
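One simple way an ensemble can combine modality-specific classifiers (clinical, T1WI, T2WI, DAT-SPECT) is soft voting: averaging their predicted class probabilities. A sketch of that idea, not necessarily the authors' exact combination scheme:

```python
def soft_vote(model_probs):
    """Soft-voting ensemble: average class-probability vectors across models.

    `model_probs` is a list of per-model probability vectors, one row per
    modality-specific classifier; returns the element-wise mean."""
    n = len(model_probs)
    return [sum(col) / n for col in zip(*model_probs)]
```

Averaging tends to dampen the errors of any single modality, which is consistent with the ensemble outperforming single-modality models on the external test set.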

Pauling C, Laidlow-Singh H, Evans E, Garbera D, Williamson R, Fernando R, Thomas K, Martin H, Arthurs OJ, Shelmerdine SC

PubMed | Jul 7, 2025
To determine the performance of a commercially available AI tool for fracture detection when used in children with osteogenesis imperfecta (OI). All appendicular and pelvic radiographs from an OI clinic at a single centre from 48 patients were included. Seven radiologists evaluated anonymised images in two rounds, first without, then with AI assistance, and differences in diagnostic accuracy between the rounds were analysed. The 48 patients (mean age 12 years) provided 336 images containing 206 fractures, established by consensus opinion of two radiologists. AI achieved a per-examination accuracy of 74.8% [95% CI: 65.4%, 82.7%], compared to an average radiologist performance of 83.4% [95% CI: 75.2%, 89.8%]. With AI assistance, average radiologist accuracy per examination improved to 90.7% [95% CI: 83.5%, 95.4%]. AI gave more false negatives than radiologists, with 80 missed fractures versus 41, respectively. At the per-image level, radiologists who altered their original decision mostly did so (74.6%) to agree with AI; 82.8% of these changes led to a correct result, and 64.0% were changes from a false positive to a true negative. Despite inferior standalone performance, AI assistance can still improve radiologist fracture detection in a rare-disease paediatric population; radiologists using AI typically reached more accurate diagnostic outcomes through reduced false positives. Future studies focusing on the real-world application of AI tools in a larger population of children with bone fragility disorders will help evaluate whether these improvements in accuracy translate into improved patient outcomes. Question How well does a commercially available artificial intelligence (AI) tool identify fractures on appendicular radiographs of children with osteogenesis imperfecta (OI), and can it also improve radiologists' identification of fractures in this population?
Findings Specialist human radiologists outperformed the AI fracture detection tool when acting alone; however, their overall diagnostic performance improved with AI assistance. Clinical relevance AI assistance improves specialist radiologist fracture detection in children with osteogenesis imperfecta, even though the AI's standalone performance was inferior to that of the radiologists acting alone; this was because the AI moderated the number of false positives generated by the radiologists.
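The reported accuracy, sensitivity, and specificity figures all derive from the same confusion-matrix counts, which is why reducing false positives lifts accuracy directly. A minimal sketch:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,        # fraction of all calls that were right
        "sensitivity": tp / (tp + fn),        # fracture detection rate
        "specificity": tn / (tn + fp),        # correct clearance rate
    }
```

Converting a false positive to a true negative raises both the numerator of accuracy and specificity, which matches the mechanism the study describes.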

Vinoth NAS, Kalaivani J, Arieth RM, Sivasakthiselvan S, Park GC, Joshi GP, Cho W

PubMed | Jul 7, 2025
Lung and colon cancers (LCC) are among the leading causes of human death and disease. Early diagnosis involves various tests, namely ultrasound (US), magnetic resonance imaging (MRI), and computed tomography (CT). Beyond diagnostic imaging, histopathology is one of the most effective methods, delivering cell-level imaging of the tissue under inspection. Its benefits are limited, however, by the restricted number of patients who receive a final diagnosis early enough for treatment, and inter-observer errors are also possible. Clinical informatics is an interdisciplinary field that integrates healthcare, information technology, and data analytics to improve patient care, clinical decision-making, and medical research. Recently, deep learning (DL) has proved effective in the medical sector: cancer diagnosis can be automated by utilizing the capabilities of artificial intelligence (AI), enabling faster analysis of more cases cost-effectively. With extensive technical developments, DL has emerged as an effective tool in medical settings, particularly in medical imaging. This study presents an Enhanced Fusion of Transfer Learning Models and Optimization-Based Clinical Biomedical Imaging for Accurate Lung and Colon Cancer Diagnosis (FTLMO-BILCCD) model. The main objective of the FTLMO-BILCCD technique is to develop an efficient method for LCC detection using clinical biomedical imaging. Initially, the image pre-processing stage applies a median filter (MF) to eliminate unwanted noise from the input image data. Fusion models, namely CapsNet, EfficientNetV2, and MobileNet-V3 Large, are then employed for feature extraction. The FTLMO-BILCCD technique implements a hybrid of temporal pattern attention and a bidirectional gated recurrent unit (TPA-BiGRU) for classification.
Finally, the beluga whale optimization (BWO) technique optimally tunes the hyperparameter ranges of the TPA-BiGRU model, resulting in greater classification performance. The FTLMO-BILCCD approach is evaluated on the LCC-HI dataset, where its performance validation showed a superior accuracy of 99.16% over existing models.

Liu H, Zhang J, Chen S, Ganesh A, Xu Y, Hu B, Menon BK, Qiu W

PubMed | Jul 7, 2025
Collateral circulation is a critical determinant of clinical outcomes in acute ischemic stroke (AIS) patients and plays a key role in patient selection for endovascular therapy. This study aimed to develop an automated method for assessing and quantifying collateral circulation on multi-phase CT angiography (mCTA), with the goal of reducing observer variability and improving diagnostic efficiency. This retrospective study included mCTA images from 420 AIS patients within 14 hours of stroke symptom onset. A deep learning-based classification method with a tailored preprocessing module was developed to assess collateral circulation status. Manual evaluations using the simplified Menon method served as the ground truth. Model performance was assessed through five-fold cross-validation using metrics including accuracy, F1 score, precision, sensitivity, specificity, and the area under the receiver operating characteristic curve. The median age of the 420 patients was 73 years (IQR: 64-80 years; 222 men), and the median time from symptom onset to mCTA acquisition was 123 minutes (IQR: 79-245.5 minutes). The proposed framework achieved an accuracy of 87.6% for three-class collateral scores (good, intermediate, poor), with an F1 score of 85.7%, precision of 83.8%, sensitivity of 89.3%, specificity of 92.9%, AUC of 93.7%, ICC of 0.832, and kappa of 0.781. For two-class collateral scores, it achieved 94.0% accuracy for good vs. non-good scores (F1 score 94.4%, precision 95.9%, sensitivity 93.0%, specificity 94.1%, AUC 97.1%, ICC 0.882, kappa 0.881) and 97.1% for poor vs. non-poor scores (F1 score 98.5%, precision 98.0%, sensitivity 99.0%, specificity 84.8%, AUC 95.6%, ICC 0.740, kappa 0.738). Additional analyses demonstrated that multi-phase CTA outperformed single- or two-phase CTA in collateral assessment.
The proposed deep learning framework demonstrated high accuracy and consistency with radiologist-assigned scores for evaluating collateral circulation on multi-phase CTA in AIS patients. This method may offer a useful tool to aid clinical decision-making, reducing variability and improving diagnostic workflow. AIS = Acute Ischemic Stroke; mCTA = multi-phase Computed Tomography Angiography; DL = deep learning; AUC = area under the receiver operating characteristic curve; IQR = interquartile range; ROC = receiver operating characteristic.
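The kappa values reported above measure chance-corrected agreement between model-assigned and radiologist-assigned collateral scores. Cohen's kappa can be sketched as:

```python
def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two raters' labels, corrected for chance."""
    n = len(a)
    p_observed = sum(x == y for x, y in zip(a, b)) / n          # raw agreement rate
    labels = set(a) | set(b)
    p_chance = sum((a.count(l) / n) * (b.count(l) / n)          # agreement expected
                   for l in labels)                              # from label frequencies
    return (p_observed - p_chance) / (1 - p_chance)
```

A kappa of 0.781 for the three-class task, as reported, sits in the range conventionally read as substantial agreement, well above what matching label frequencies by chance would produce.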

Lilhore UK, Sunder R, Simaiya S, Alsafyani M, Monish Khan MD, Alroobaea R, Alsufyani H, Baqasah AM

PubMed | Jul 7, 2025
Accurate segmentation of brain tumors from multimodal Magnetic Resonance Imaging (MRI) plays a critical role in diagnosis, treatment planning, and disease monitoring in neuro-oncology. Traditional methods of tumor segmentation, often manual and labour-intensive, are prone to inconsistencies and inter-observer variability. Recently, deep learning models, particularly Convolutional Neural Networks (CNNs), have shown great promise in automating this process. However, these models face challenges in terms of generalization across diverse datasets, accurate tumor boundary delineation, and uncertainty estimation. To address these challenges, we propose AG-MS3D-CNN, an attention-guided multiscale 3D convolutional neural network for brain tumor segmentation. Our model integrates local and global contextual information through multiscale feature extraction and leverages spatial attention mechanisms to enhance boundary delineation, particularly in complex tumor regions. We also introduce Monte Carlo dropout for uncertainty estimation, providing clinicians with confidence scores for each segmentation, which is crucial for informed decision-making. Furthermore, we adopt a multitask learning framework, which enables the simultaneous segmentation, classification, and volume estimation of tumors. To ensure robustness and generalizability across diverse MRI acquisition protocols and scanners, we integrate a domain adaptation module into the network. Extensive evaluations on the BraTS 2021 dataset and additional external datasets, such as OASIS, ADNI, and IXI, demonstrate the superior performance of AG-MS3D-CNN compared to existing state-of-the-art methods. Our model achieves high Dice scores and shows excellent robustness, making it a valuable tool for clinical decision support in neuro-oncology.
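Monte Carlo dropout, as used here for uncertainty estimation, keeps dropout active at inference and aggregates several stochastic forward passes: the mean gives the prediction and the variance a per-voxel confidence signal. A minimal sketch of the aggregation step (the stochastic forward passes themselves are assumed given):

```python
def mc_dropout_estimate(passes):
    """Aggregate T stochastic forward passes (dropout left on at inference).

    `passes` is a list of T probability maps (flat lists, one value per voxel);
    returns the per-voxel mean (the prediction) and variance (the uncertainty)."""
    t = len(passes)
    means, variances = [], []
    for voxel in zip(*passes):
        m = sum(voxel) / t
        means.append(m)
        variances.append(sum((p - m) ** 2 for p in voxel) / t)
    return means, variances
```

Voxels where the stochastic passes disagree get high variance, which is the confidence score a clinician would inspect before trusting a segmented boundary.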

Mulliqi N, Blilie A, Ji X, Szolnoky K, Olsson H, Titus M, Martinez Gonzalez G, Boman SE, Valkonen M, Gudlaugsson E, Kjosavik SR, Asenjo J, Gambacorta M, Libretti P, Braun M, Kordek R, Łowicki R, Hotakainen K, Väre P, Pedersen BG, Sørensen KD, Ulhøi BP, Rantalainen M, Ruusuvuori P, Delahunt B, Samaratunga H, Tsuzuki T, Janssen EAM, Egevad L, Kartasalo K, Eklund M

PubMed | Jul 7, 2025
Histopathological evaluation of prostate biopsies using the Gleason scoring system is critical for prostate cancer diagnosis and treatment selection. However, grading variability among pathologists can lead to inconsistent assessments, risking inappropriate treatment. Similar challenges complicate the assessment of other prognostic features like cribriform cancer morphology and perineural invasion. Many pathology departments are also facing an increasingly unsustainable workload due to rising prostate cancer incidence and a decreasing pathologist workforce, coinciding with increasing requirements for more complex assessments and reporting. Digital pathology and artificial intelligence (AI) algorithms for analysing whole slide images show promise in improving the accuracy and efficiency of histopathological assessments. Studies have demonstrated AI's capability to diagnose and grade prostate cancer comparably to expert pathologists. However, external validations on diverse data sets have been limited and often show reduced performance. Historically, there have been no well-established guidelines for AI study designs and validation methods. Diagnostic assessments of AI systems often lack preregistered protocols and rigorous external cohort sampling, essential for reliable evidence of their safety and accuracy. This study protocol covers the retrospective validation of an AI system for prostate biopsy assessment. The primary objective of the study is to develop a high-performing and robust AI model for diagnosis and Gleason scoring of prostate cancer in core needle biopsies, and to evaluate at scale whether it can generalise to fully external data from independent patients, pathology laboratories, and digitalisation platforms. The secondary objectives cover AI performance in estimating cancer extent and detecting cribriform prostate cancer and perineural invasion.
This protocol outlines the steps for data collection, predefined partitioning of data cohorts for AI model training and validation, model development and predetermined statistical analyses, ensuring systematic development and comprehensive validation of the system. The protocol adheres to Transparent Reporting of a multivariable prediction model of Individual Prognosis Or Diagnosis+AI (TRIPOD+AI), Protocol Items for External Cohort Evaluation of a Deep Learning System in Cancer Diagnostics (PIECES), Checklist for AI in Medical Imaging (CLAIM) and other relevant best practices. Data collection and usage were approved by the respective ethical review boards of each participating clinical laboratory, and centralised anonymised data handling was approved by the Swedish Ethical Review Authority. The study will be conducted in agreement with the Helsinki Declaration. The findings will be disseminated in peer-reviewed publications (open access).
