Page 47 of 1391387 results

FedSynthCT-Brain: A federated learning framework for multi-institutional brain MRI-to-CT synthesis.

Raggio CB, Zabaleta MK, Skupien N, Blanck O, Cicone F, Cascini GL, Zaffino P, Migliorelli L, Spadea MF

PubMed · Jun 1 2025
The generation of Synthetic Computed Tomography (sCT) images has become a pivotal methodology in modern clinical practice, particularly in the context of Radiotherapy (RT) treatment planning. The use of sCT enables dose calculation, pushing towards Magnetic Resonance Imaging (MRI)-guided radiotherapy treatments. Moreover, with the introduction of MRI-Positron Emission Tomography (PET) hybrid scanners, the derivation of sCT from MRI can improve the attenuation correction of PET images. Deep learning methods for MRI-to-sCT have shown promising results, but their reliance on single-centre training datasets limits generalisation to diverse clinical settings. Moreover, creating centralised multi-centre datasets may pose privacy concerns. To address these issues, we introduced FedSynthCT-Brain, an approach based on the Federated Learning (FL) paradigm for MRI-to-sCT in brain imaging. This is among the first applications of FL for MRI-to-sCT, employing a cross-silo horizontal FL approach that allows multiple centres to collaboratively train a U-Net-based deep learning model. We validated our method using real multicentre data from four European and American centres, simulating heterogeneous scanner types and acquisition modalities, and tested its performance on an independent dataset from a centre outside the federation. In the case of the unseen centre, the federated model achieved a median Mean Absolute Error (MAE) of 102.0 HU across 23 patients, with an interquartile range of 96.7-110.5 HU. The median (interquartile range) for the Structural Similarity Index (SSIM) and the Peak Signal-to-Noise Ratio (PSNR) were 0.89 (0.86-0.89) and 26.58 (25.52-27.42), respectively.
The analysis of the results showed acceptable performance of the federated approach, highlighting the potential of FL to improve the generalisability of MRI-to-sCT and to advance safe and equitable clinical applications, while fostering collaboration and preserving data privacy.
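The aggregation step in cross-silo horizontal FL is commonly a dataset-size-weighted average of the clients' model parameters (FedAvg). The sketch below illustrates that pattern only; the function name and toy numbers are illustrative, and the paper's actual U-Net training and aggregation details are not reproduced here.

```python
# Minimal FedAvg-style aggregation sketch (illustrative, not the authors'
# exact method): each centre trains locally, then the server averages the
# parameters weighted by each centre's dataset size.

def fedavg(client_weights, client_sizes):
    """Aggregate per-client parameter vectors (lists of floats) into a
    global model via a dataset-size-weighted average."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_w = [0.0] * n_params
    for weights, n in zip(client_weights, client_sizes):
        for i, p in enumerate(weights):
            global_w[i] += p * n / total
    return global_w

# Three hypothetical centres with unequal data volumes
centres = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 20, 70]
global_model = fedavg(centres, sizes)  # ≈ [4.2, 5.2]
```

In practice each `client_weights` entry would be the flattened parameters of a locally trained network, and the averaged result is broadcast back for the next federated round.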

Opportunistic assessment of osteoporosis using hip and pelvic X-rays with OsteoSight™: validation of an AI-based tool in a US population.

Pignolo RJ, Connell JJ, Briggs W, Kelly CJ, Tromans C, Sultana N, Brady JM

PubMed · Jun 1 2025
Identifying patients at risk of low bone mineral density (BMD) from X-rays presents an attractive approach to increase case finding. This paper demonstrated the diagnostic accuracy, reproducibility, and robustness of a new technology, OsteoSight™, which could increase diagnosis and preventive treatment rates for patients with low BMD. This study aimed to evaluate the diagnostic accuracy, reproducibility, and robustness of OsteoSight™, an automated image analysis tool designed to identify low bone mineral density (BMD) from routine hip and pelvic X-rays. Given the global rise in osteoporosis-related fractures and the limitations of current diagnostic paradigms, OsteoSight offers a scalable solution that integrates into existing clinical workflows. Performance of the technology was tested across three key areas: (1) diagnostic accuracy in identifying low BMD as compared to dual-energy X-ray absorptiometry (DXA), the clinical gold standard; (2) reproducibility, through analysis of two images from the same patient; and (3) robustness, by evaluating the tool's performance across different patient demographics and X-ray scanner hardware. The diagnostic accuracy of OsteoSight for identifying patients at risk of low BMD was an area under the receiver operating characteristic curve (AUROC) of 0.834 [0.789-0.880], with consistent results across subgroups of clinical confounders and X-ray scanner hardware. Specificity (0.852 [0.783-0.930]) and sensitivity (0.628 [0.538-0.743]) met pre-specified acceptance criteria. The pre-processing pipeline successfully excluded unsuitable cases, including incorrect body parts, metalwork, and unacceptable femur positioning. The results demonstrate that OsteoSight is accurate in identifying patients with low BMD. This suggests its utility as an opportunistic assessment tool, especially in settings where DXA access is limited or a recent DXA is unavailable.
The tool's reproducibility and robust performance across various clinical confounders further supports its integration into routine orthopedic and medical practices, potentially broadening the reach of osteoporosis assessment and enabling earlier intervention for at-risk patients.
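AUROC, the headline metric above, can be computed directly as the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney interpretation). This minimal sketch assumes per-case risk scores, which the abstract does not publish; the score values are invented.

```python
def auroc(scores_pos, scores_neg):
    """AUROC as the probability that a positive case outscores a negative
    one (Mann-Whitney U divided by n_pos * n_neg); ties count as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical risk scores: low-BMD cases vs. controls
area = auroc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2])  # 8 of 9 pairs ordered correctly
```

This O(n²) pairwise form is fine for illustration; production implementations sort once and use ranks instead.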

AI-supported approaches for mammography single and double reading: A controlled multireader study.

Brancato B, Magni V, Saieva C, Risso GG, Buti F, Catarzi S, Ciuffi F, Peruzzi F, Regini F, Ambrogetti D, Alabiso G, Cruciani A, Doronzio V, Frati S, Giannetti GP, Guerra C, Valente P, Vignoli C, Atzori S, Carrera V, D'Agostino G, Fazzini G, Picano E, Turini FM, Vani V, Fantozzi F, Vietro D, Cavallero D, Vietro F, Plataroti D, Schiaffino S, Cozzi A

PubMed · Jun 1 2025
To assess the impact of artificial intelligence (AI) on the diagnostic performance of radiologists with varying experience levels in mammography reading, considering single and simulated double reading approaches. In this retrospective study, 150 mammography examinations (30 with pathology-confirmed malignancies, 120 without malignancies [confirmed by 2-year follow-up]) were reviewed according to five approaches: A) human single reading by 26 radiologists of varying experience; B) AI single reading (Lunit INSIGHT MMG); C) human single reading with simultaneous AI support; D) simulated human-human double reading; E) simulated human-AI double reading, with AI as second independent reader flagging cases with a cancer probability ≥10 %. Sensitivity and specificity were calculated and compared using McNemar's test and univariate and multivariable logistic regression. Compared to single reading without AI support, single reading with simultaneous AI support improved mean sensitivity from 69.2 % (standard deviation [SD] 15.6) to 84.5 % (SD 8.1, p < 0.001), providing comparable mean specificity (91.8 % versus 90.8 %, p = 0.06). The sensitivity increase provided by the AI-supported single reading was largest in the group of radiologists with a sensitivity below the median in the non-supported single reading, from 56.7 % (SD 12.1) to 79.7 % (SD 10.2, p < 0.001). In the simulated human-AI double reading approach, sensitivity further increased to 91.8 % (SD 3.4), surpassing that of the simulated human-human double reading (87.4 %, SD 8.8, p = 0.016), with comparable mean specificity (84.0 % versus 83.0 %, p = 0.17). AI support significantly enhanced sensitivity across all reading approaches, particularly benefiting worse-performing radiologists. In the simulated double reading approaches, AI incorporation as independent second reader significantly increased sensitivity without compromising specificity.
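The human-AI double-reading rule (approach E) amounts to a logical OR of the human recall decision and the AI flag at the ≥10 % cancer-probability threshold. A minimal sketch, with entirely made-up reads and labels:

```python
def double_read(human_recall, ai_prob, threshold=0.10):
    """Approach E in miniature: a case is recalled if the human reader
    flags it OR the independent AI reader's cancer probability meets the
    threshold."""
    return human_recall or ai_prob >= threshold

def sens_spec(recalls, truths):
    """Sensitivity and specificity from paired recall decisions and
    ground-truth malignancy labels."""
    tp = sum(r and t for r, t in zip(recalls, truths))
    tn = sum(not r and not t for r, t in zip(recalls, truths))
    pos = sum(truths)
    neg = len(truths) - pos
    return tp / pos, tn / neg

# Hypothetical reads: 3 cancers, 2 normals
truths = [True, True, True, False, False]
human = [True, False, False, False, True]
ai_probs = [0.90, 0.20, 0.05, 0.03, 0.50]
recalls = [double_read(h, p) for h, p in zip(human, ai_probs)]
sensitivity, specificity = sens_spec(recalls, truths)
```

The OR rule can only raise sensitivity relative to the human alone, at the cost of recalling any false positives the AI adds, which is exactly the trade-off the study quantifies.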

A CT-free deep-learning-based attenuation and scatter correction for copper-64 PET in different time-point scans.

Adeli Z, Hosseini SA, Salimi Y, Vahidfar N, Sheikhzadeh P

PubMed · Jun 1 2025
This study aimed to develop and evaluate a deep-learning model for attenuation and scatter correction in whole-body 64Cu-based PET imaging. A SwinUNETR model was implemented using the MONAI framework. Whole-body PET-nonAC and PET-CTAC image pairs were used for training, where PET-nonAC served as the input and PET-CTAC as the output. Due to the limited number of Cu-based PET/CT images, a model pre-trained on 51 Ga-PSMA PET images was fine-tuned on 15 Cu-based PET images via transfer learning. The model was trained without freezing layers, adapting learned features to the Cu-based dataset. For testing, six additional Cu-based PET images were used, representing 1-h, 12-h, and 48-h time points, with two images per group. The model performed best at the 12-h time point, with an MSE of 0.002 ± 0.0004 SUV<sup>2</sup>, PSNR of 43.14 ± 0.08 dB, and SSIM of 0.981 ± 0.002. At 48 h, accuracy slightly decreased (MSE = 0.036 ± 0.034 SUV<sup>2</sup>), but image quality remained high (PSNR = 44.49 ± 1.09 dB, SSIM = 0.981 ± 0.006). At 1 h, the model also showed strong results (MSE = 0.024 ± 0.002 SUV<sup>2</sup>, PSNR = 45.89 ± 5.23 dB, SSIM = 0.984 ± 0.005), demonstrating consistency across time points. Despite the limited size of the training dataset, fine-tuning from a previously pre-trained model yielded acceptable performance. The results demonstrate that the proposed deep learning model can effectively generate PET-DLAC images that closely resemble PET-CTAC images, with only minor errors.
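The MSE and PSNR figures above are standard image-similarity metrics. A self-contained sketch of both, using toy flat arrays rather than the study's SUV-scaled PET volumes:

```python
import math

def mse(reference, prediction):
    """Mean squared error between two equally sized flat image arrays."""
    return sum((a - b) ** 2 for a, b in zip(reference, prediction)) / len(reference)

def psnr(reference, prediction, peak):
    """Peak signal-to-noise ratio in dB relative to the given peak value;
    higher means the prediction is closer to the reference."""
    m = mse(reference, prediction)
    return float("inf") if m == 0 else 10.0 * math.log10(peak ** 2 / m)

# Toy example: a 3-pixel "image" and a slightly wrong reconstruction
ref = [0.0, 2.0, 4.0]
pred = [1.0, 2.0, 5.0]
err = mse(ref, pred)                 # (1 + 0 + 1) / 3
quality = psnr(ref, pred, peak=4.0)  # 10 * log10(16 / err)
```

For PET, `peak` would typically be chosen relative to the SUV dynamic range of the reference image, which is why the paper reports MSE in SUV² and PSNR in dB.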

High-Performance Computing-Based Brain Tumor Detection Using Parallel Quantum Dilated Convolutional Neural Network.

Shinde SS, Pande A

PubMed · Jun 1 2025
In the healthcare field, a brain tumor causes irregular development of cells in the brain. One of the popular ways to identify a brain tumor and its progression is magnetic resonance imaging (MRI). However, existing methods often suffer from high computational complexity, noise interference, and limited accuracy, which affect the early diagnosis of brain tumors. To resolve such issues, a high-performance computing model, such as big data-based detection, is utilized. As a result, this work proposes a novel approach named parallel quantum dilated convolutional neural network (PQDCNN)-based brain tumor detection using the Map-Reducer. Data partitioning is the prime process, which is done using Fuzzy Local Information C-Means clustering (FLICM). The partitioned data is subjected to the map reducer. In the mapper, Medav filtering removes the noise, and tumor area segmentation is done by a transformer model named TransBTSV2. After segmenting the tumor part, image augmentation and feature extraction are done. In the reducer phase, the brain tumor is detected using the proposed PQDCNN. Furthermore, the efficiency of PQDCNN is validated using the accuracy, sensitivity, and specificity metrics, and ideal values of 91.52%, 91.69%, and 92.26% are achieved.
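The mapper/reducer pipeline described above follows the generic MapReduce pattern: partitioned records are mapped to keyed intermediate values, grouped, and reduced per group. This plain-Python skeleton shows the pattern only; the clustering, filtering, segmentation, and PQDCNN stages are stand-ins reduced to toy callables.

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Generic MapReduce skeleton: map each record to (key, value) pairs,
    group values by key, then reduce each group. A schematic stand-in for
    the paper's partition -> mapper -> reducer pipeline."""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# Toy usage: count image slices per (hypothetical) FLICM partition label
slices = [("c1", "img0"), ("c2", "img1"), ("c1", "img2")]
counts = map_reduce(slices, lambda r: [(r[0], 1)], lambda k, vs: sum(vs))
```

In the paper's setting, the mapper would carry filtering and segmentation per partition and the reducer would run the detection network over the grouped results; here both are reduced to counting for clarity.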

Fine-Tuning Deep Learning Model for Quantitative Knee Joint Mapping With MR Fingerprinting and Its Comparison to Dictionary Matching Method.

Zhang X, de Moura HL, Monga A, Zibetti MVW, Regatte RR

PubMed · Jun 1 2025
Magnetic resonance fingerprinting (MRF), as an emerging versatile and noninvasive imaging technique, provides simultaneous quantification of multiple quantitative MRI parameters, which have been used to detect changes in cartilage composition and structure in osteoarthritis. Deep learning (DL)-based methods for quantification mapping in MRF overcome the memory constraints and offer faster processing compared to the conventional dictionary matching (DM) method. However, limited attention has been given to the fine-tuning of neural networks (NNs) in DL and fair comparison with DM. In this study, we investigate the impact of training parameter choices on NN performance and compare the fine-tuned NN with DM for multiparametric mapping in MRF. Our approach includes optimizing NN hyperparameters, analyzing the singular value decomposition (SVD) components of MRF data, and optimizing the DM method. We conducted experiments on synthetic data, the NIST/ISMRM MRI system phantom with ground truth, and in vivo knee data from 14 healthy volunteers. The results demonstrate the critical importance of selecting appropriate training parameters, as these significantly affect NN performance. The findings also show that NNs improve the accuracy and robustness of T<sub>1</sub>, T<sub>2</sub>, and T<sub>1ρ</sub> mappings compared to DM in synthetic datasets. For in vivo knee data, the NN achieved comparable results for T<sub>1</sub>, with slightly lower T<sub>2</sub> and slightly higher T<sub>1ρ</sub> measurements compared to DM. In conclusion, the fine-tuned NN can be used to increase accuracy and robustness for multiparametric quantitative mapping from MRF of the knee joint.
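The conventional DM baseline matches each measured fingerprint against a precomputed dictionary of simulated signal evolutions, typically by maximum normalised inner product. A minimal sketch, with a hypothetical two-entry dictionary (real MRF dictionaries hold many thousands of entries, which is the memory bottleneck the NN approach avoids):

```python
import math

def dictionary_match(signal, dictionary):
    """Conventional MRF dictionary matching: return the parameter entry
    whose normalised simulated fingerprint has the highest inner product
    with the normalised measured signal."""
    def normalise(v):
        s = math.sqrt(sum(x * x for x in v))
        return [x / s for x in v]

    sig = normalise(signal)
    best_params, best_score = None, -1.0
    for params, fingerprint in dictionary:
        score = sum(a * b for a, b in zip(sig, normalise(fingerprint)))
        if score > best_score:
            best_params, best_score = params, score
    return best_params

# Hypothetical dictionary: parameter dicts paired with simulated fingerprints
toy_dictionary = [
    ({"T1": 800}, [1.0, 0.5]),
    ({"T1": 1200}, [0.5, 1.0]),
]
match = dictionary_match([0.9, 0.6], toy_dictionary)
```

In practice the signals are time series of hundreds of samples (often SVD-compressed, as the paper notes), and the matched entry yields T1, T2, and T1ρ simultaneously.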

ICPPNet: A semantic segmentation network model based on inter-class positional prior for scoliosis reconstruction in ultrasound images.

Wang C, Zhou Y, Li Y, Pang W, Wang L, Du W, Yang H, Jin Y

PubMed · Jun 1 2025
Considering the radiation hazard of X-ray, safer, more convenient, and cost-effective ultrasound methods are gradually becoming new diagnostic approaches for scoliosis. In ultrasound images of spine regions, it is challenging to accurately identify the spine due to relatively small target areas and the presence of substantial interfering information. Therefore, we developed a novel neural network that incorporates prior knowledge to precisely segment spine regions in ultrasound images. We constructed a dataset of ultrasound images of spine regions for semantic segmentation, containing 3136 images of 30 patients with scoliosis. We propose a network model (ICPPNet) that fully utilizes inter-class positional prior knowledge, by combining an inter-class positional probability heatmap, to achieve accurate segmentation of target areas. ICPPNet achieved an average Dice similarity coefficient of 70.83% and an average 95% Hausdorff distance of 11.28 mm on the dataset, demonstrating its excellent performance. The average error between the Cobb angle measured by our method and the Cobb angle measured from X-ray images is 1.41 degrees, and the coefficient of determination is 0.9879, with a strong correlation. ICPPNet provides a new solution for medical image segmentation tasks with positional prior knowledge between target classes and strongly supports the subsequent reconstruction of spine models from ultrasound images.
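The Dice similarity coefficient used to evaluate ICPPNet measures overlap between a predicted and a ground-truth binary mask. A minimal flat-mask version, with invented toy masks:

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks given as flat
    0/1 lists: 2 * |intersection| / (|pred| + |truth|)."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks count as a perfect match
    return 1.0 if total == 0 else 2.0 * intersection / total

# Toy 2x2 masks flattened row-major: half the foreground pixels overlap
score = dice([1, 1, 0, 0], [1, 0, 1, 0])
```

Dice rewards overlap of the (small) spine regions directly, while the 95% Hausdorff distance also reported in the paper penalises boundary outliers, so the two metrics are complementary.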

Automated engineered-stone silicosis screening and staging using Deep Learning with X-rays.

Priego-Torres B, Sanchez-Morillo D, Khalili E, Conde-Sánchez MÁ, García-Gámez A, León-Jiménez A

PubMed · Jun 1 2025
Silicosis, a debilitating occupational lung disease caused by inhaling crystalline silica, continues to be a significant global health issue, especially with the increasing use of engineered stone (ES) surfaces containing high silica content. Traditional diagnostic methods, dependent on radiological interpretation, have low sensitivity, especially in the early stages of the disease, and present variability between evaluators. This study explores the efficacy of deep learning techniques in automating the screening and staging of silicosis using chest X-ray images. Utilizing a comprehensive dataset obtained from the medical records of a cohort of workers exposed to artificial quartz conglomerates, we implemented a preprocessing stage for rib-cage segmentation, followed by classification using state-of-the-art deep learning models. The segmentation model exhibited high precision, ensuring accurate identification of thoracic structures. In the screening phase, our models achieved near-perfect accuracy, with ROC AUC values reaching 1.0, effectively distinguishing between healthy individuals and those with silicosis. The models demonstrated remarkable precision in the staging of the disease. Nevertheless, differentiating between simple silicosis and progressive massive fibrosis, the evolved and complicated form of the disease, presented certain difficulties, especially during the transitional period, when assessment can be significantly subjective. Notwithstanding these difficulties, the models achieved an accuracy of around 81% and ROC AUC scores nearing 0.93. This study highlights the potential of deep learning to generate clinical decision support tools that increase accuracy and effectiveness in the diagnosis and staging of silicosis, whose early detection would allow the patient to be moved away from all sources of occupational exposure, therefore constituting a substantial advancement in occupational health diagnostics.

Deep learning-driven multi-class classification of brain strokes using computed tomography: A step towards enhanced diagnostic precision.

Kulathilake CD, Udupihille J, Abeysundara SP, Senoo A

PubMed · Jun 1 2025
To develop and validate deep learning models leveraging CT imaging for the prediction and classification of brain stroke conditions, with the potential to enhance accuracy and support clinical decision-making. This retrospective, bi-center study included data from 250 patients, with a dataset of 8186 CT images collected from 2017 to 2022. Two AI models were developed using the Expanded ResNet101 deep learning framework as a two-step model. Model performance was evaluated using confusion matrices, supplemented by external validation with an independent dataset. External validation was conducted by an expert and two external members. Overall accuracy, confidence intervals, Cohen's Kappa value, and McNemar's test P-values were calculated. A total of 8186 CT images were incorporated, with 6386 images used for training and 900 images for testing and validation in Model 01. Further, 1619 CT images were used for training and 600 images for testing and validation in Model 02. The average accuracy, precision, and F1 score for both models were assessed: Model 01 achieved 99.6 %, 99.4 %, and 99.6 % respectively, whereas Model 02 achieved 99.2 %, 98.8 %, and 99.1 %. The external validation accuracies were 78.6 % (95 % CI: 0.73,0.83; P < 0.001) and 60.2 % (95 % CI: 0.48,0.70; P < 0.001) for Models 01 and 02 respectively, as evaluated by the expert. Deep learning models demonstrated high accuracy, precision, and F1 scores in predicting outcomes for brain stroke patients. With a larger cohort and diverse radiologic mimics, these models could support clinicians in prognosis and decision-making.
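The accuracy, precision, and F1 figures above all derive from the confusion matrix. A small helper makes the relationships explicit; the counts in the example are chosen arbitrarily, not taken from the study.

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from binary confusion-matrix
    counts (true/false positives and negatives)."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Arbitrary counts: 90 hits, 10 false alarms, 10 misses, 90 correct rejections
acc, prec, rec, f1 = classification_metrics(90, 10, 10, 90)
```

For the multi-class stroke setting, these quantities are computed per class from the full confusion matrix and then averaged, which is presumably what the reported per-model averages reflect.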

A radiomics approach to distinguish Progressive Supranuclear Palsy Richardson's syndrome from other phenotypes starting from MR images.

Pisani N, Abate F, Avallone AR, Barone P, Cesarelli M, Amato F, Picillo M, Ricciardi C

PubMed · Jun 1 2025
Progressive Supranuclear Palsy (PSP) is an uncommon neurodegenerative disorder with different clinical onsets, including Richardson's syndrome (PSP-RS) and other variant phenotypes (vPSP). Recognising the clinical progression of different phenotypes would enhance the accuracy of detection and treatment of PSP. The study goal was to identify radiomic biomarkers, extracted from T1-weighted magnetic resonance images (MRI), for distinguishing PSP phenotypes. Forty PSP patients (20 PSP-RS and 20 vPSP) took part in the present work. Radiomic features were collected from 21 regions of interest (ROIs), mainly from the frontal cortex, supratentorial white matter, basal nuclei, brainstem, cerebellum, and 3rd and 4th ventricles. After feature selection, three tree-based machine learning (ML) classifiers were implemented to classify PSP phenotypes. Ten out of 21 ROIs performed best in terms of sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUCROC). In particular, features extracted from the pons region obtained the best accuracy (0.92) and AUCROC (0.83) values, while using the other 10 ROIs, evaluation metrics ranged from 0.67 to 0.83. Eight features of the Gray Level Dependence Matrix were recurrently extracted across the 10 ROIs. Furthermore, by combining these ROIs, the results exceeded 0.83 in phenotype classification, and the selected areas were the brainstem, pons, occipital white matter, precentral gyrus, and thalamus regions. Based on the achieved results, our proposed approach could represent a promising tool for distinguishing PSP-RS from vPSP.
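Tree-based classifiers of the kind used here split samples on feature thresholds; a one-feature decision stump captures the core idea in miniature. The feature values and labels below are invented, standing in for a single selected radiomic feature and the PSP-RS/vPSP labels.

```python
def fit_stump(values, labels):
    """Fit a one-feature decision stump: scan candidate thresholds and
    return the (threshold, accuracy) pair that best separates the two
    classes when predicting True for values >= threshold. A miniature
    stand-in for the tree-based classifiers used in the study."""
    best_threshold, best_accuracy = None, 0.0
    for threshold in sorted(set(values)):
        predictions = [v >= threshold for v in values]
        accuracy = sum(p == l for p, l in zip(predictions, labels)) / len(labels)
        if accuracy > best_accuracy:
            best_threshold, best_accuracy = threshold, accuracy
    return best_threshold, best_accuracy

# Invented radiomic feature values; True = PSP-RS, False = vPSP
split = fit_stump([0.1, 0.2, 0.8, 0.9], [False, False, True, True])
```

A full decision tree recursively applies such splits across many features, and ensembles of trees (as in the three tree-based ML classifiers here) vote over the resulting predictions.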