
ICPPNet: A semantic segmentation network model based on inter-class positional prior for scoliosis reconstruction in ultrasound images.

Wang C, Zhou Y, Li Y, Pang W, Wang L, Du W, Yang H, Jin Y

Jun 1 2025
Given the radiation hazard of X-rays, safer, more convenient, and more cost-effective ultrasound methods are gradually becoming a new diagnostic approach for scoliosis. In ultrasound images of the spine, accurately identifying spinal regions is challenging because the target areas are relatively small and the images contain substantial interfering information. We therefore developed a novel neural network that incorporates prior knowledge to precisely segment spine regions in ultrasound images. We constructed a semantic segmentation dataset of spinal ultrasound images containing 3136 images from 30 patients with scoliosis, and we propose a network model (ICPPNet) that fully exploits inter-class positional prior knowledge, via an inter-class positional probability heatmap, to achieve accurate segmentation of the target areas. ICPPNet achieved an average Dice similarity coefficient of 70.83% and an average 95% Hausdorff distance of 11.28 mm on the dataset, demonstrating excellent performance. The average error between Cobb angles measured by our method and those measured from X-ray images is 1.41 degrees, with a coefficient of determination of 0.9879, indicating a strong correlation. ICPPNet provides a new solution for medical image segmentation tasks with positional prior knowledge between target classes, and it strongly supports the subsequent reconstruction of spine models from ultrasound images.
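The two reported metrics are standard segmentation measures. A minimal sketch of how they can be computed for binary masks (illustrative only; it measures distances over all foreground voxels, whereas a full evaluation would typically use surface voxels and a library implementation such as MONAI's or MedPy's):

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def hausdorff_95(pred, gt, spacing_mm=1.0):
    """95th-percentile Hausdorff distance (mm) between two non-empty masks."""
    p = np.argwhere(pred) * spacing_mm   # foreground coordinates, scaled to mm
    g = np.argwhere(gt) * spacing_mm
    d = cdist(p, g)                      # pairwise distances
    d_pg = d.min(axis=1)                 # pred -> gt nearest distances
    d_gp = d.min(axis=0)                 # gt -> pred nearest distances
    return max(np.percentile(d_pg, 95), np.percentile(d_gp, 95))
```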

Fine-Tuning Deep Learning Model for Quantitative Knee Joint Mapping With MR Fingerprinting and Its Comparison to Dictionary Matching Method: Fine-Tuning Deep Learning Model for Quantitative MRF.

Zhang X, de Moura HL, Monga A, Zibetti MVW, Regatte RR

Jun 1 2025
Magnetic resonance fingerprinting (MRF), as an emerging versatile and noninvasive imaging technique, provides simultaneous quantification of multiple quantitative MRI parameters, which have been used to detect changes in cartilage composition and structure in osteoarthritis. Deep learning (DL)-based methods for quantification mapping in MRF overcome the memory constraints of the conventional dictionary matching (DM) method and offer faster processing. However, limited attention has been given to the fine-tuning of neural networks (NNs) in DL and to fair comparison with DM. In this study, we investigate the impact of training parameter choices on NN performance and compare the fine-tuned NN with DM for multiparametric mapping in MRF. Our approach includes optimizing NN hyperparameters, analyzing the singular value decomposition (SVD) components of MRF data, and optimizing the DM method. We conducted experiments on synthetic data, the NIST/ISMRM MRI system phantom with ground truth, and in vivo knee data from 14 healthy volunteers. The results demonstrate the critical importance of selecting appropriate training parameters, as these significantly affect NN performance. The findings also show that NNs improve the accuracy and robustness of T<sub>1</sub>, T<sub>2</sub>, and T<sub>1ρ</sub> mappings compared to DM in synthetic datasets. For in vivo knee data, the NN achieved comparable results for T<sub>1</sub>, with slightly lower T<sub>2</sub> and slightly higher T<sub>1ρ</sub> measurements compared to DM. In conclusion, the fine-tuned NN can be used to increase accuracy and robustness for multiparametric quantitative mapping from MRF of the knee joint.
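Dictionary matching, the baseline the NN is compared against, selects for each voxel the precomputed signal evolution with the highest normalized inner product. A minimal sketch, with array shapes and names as assumptions:

```python
import numpy as np

def dictionary_match(signals, dictionary, params):
    """signals:    (N_vox, T)   measured fingerprints
    dictionary: (N_atoms, T) simulated evolutions, one per (T1, T2, T1rho) combo
    params:     (N_atoms, 3) parameter values for each dictionary atom
    Returns the best-matching parameter triple per voxel."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    corr = np.abs(s @ d.conj().T)   # normalized inner products, (N_vox, N_atoms)
    best = corr.argmax(axis=1)      # index of best-matching atom per voxel
    return params[best]

# Optional SVD compression (the abstract analyzes SVD components of MRF data):
# project both signals and dictionary onto the first k right singular vectors
# of the dictionary before matching, reducing T to k time points.
# U, S, Vh = np.linalg.svd(dictionary, full_matrices=False)
# dictionary_k, signals_k = dictionary @ Vh[:k].conj().T, signals @ Vh[:k].conj().T
```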

A CT-free deep-learning-based attenuation and scatter correction for copper-64 PET in different time-point scans.

Adeli Z, Hosseini SA, Salimi Y, Vahidfar N, Sheikhzadeh P

Jun 1 2025
This study aimed to develop and evaluate a deep-learning model for attenuation and scatter correction in whole-body 64Cu-based PET imaging. A SwinUNETR model was implemented using the MONAI framework. Whole-body PET-nonAC and PET-CTAC image pairs were used for training, with PET-nonAC as the input and PET-CTAC as the output. Due to the limited number of Cu-based PET/CT images, a model pre-trained on 51 Ga-PSMA PET images was fine-tuned on 15 Cu-based PET images via transfer learning. The model was trained without freezing any layers, adapting the learned features to the Cu-based dataset. For testing, six additional Cu-based PET images were used, representing 1-h, 12-h, and 48-h time points, with two images per group. The model performed best at the 12-h time point, with an MSE of 0.002 ± 0.0004 SUV<sup>2</sup>, PSNR of 43.14 ± 0.08 dB, and SSIM of 0.981 ± 0.002. At 48 h, accuracy slightly decreased (MSE = 0.036 ± 0.034 SUV<sup>2</sup>), but image quality remained high (PSNR = 44.49 ± 1.09 dB, SSIM = 0.981 ± 0.006). At 1 h, the model also showed strong results (MSE = 0.024 ± 0.002 SUV<sup>2</sup>, PSNR = 45.89 ± 5.23 dB, SSIM = 0.984 ± 0.005), demonstrating consistency across time points. Despite the limited size of the training dataset, fine-tuning from a previously pre-trained model yielded acceptable performance. The results demonstrate that the proposed deep learning model can effectively generate PET-DLAC images that closely resemble PET-CTAC images, with only minor errors.
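A minimal sketch of the transfer-learning recipe described above: load weights pre-trained on Ga-PSMA data and fine-tune all layers on the Cu-based pairs. The hyperparameters, loss, and checkpoint path are assumptions rather than the paper's values, and SwinUNETR's argument set varies across MONAI releases:

```python
import torch
from monai.networks.nets import SwinUNETR

def finetune_on_cu64(pretrained_ckpt, cu64_loader, epochs=50, lr=1e-4):
    """Fine-tune a Ga-PSMA-pretrained SwinUNETR on Cu-64 (nonAC, CTAC) pairs."""
    # img_size is required or deprecated depending on the MONAI release.
    model = SwinUNETR(img_size=(96, 96, 96), in_channels=1, out_channels=1,
                      feature_size=48)
    model.load_state_dict(torch.load(pretrained_ckpt, map_location="cpu"))
    # No layers frozen: every parameter stays trainable during adaptation.
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()  # voxel-wise regression: PET-nonAC -> PET-CTAC
    model.train()
    for _ in range(epochs):
        for nonac, ctac in cu64_loader:  # hypothetical DataLoader of volume pairs
            optimizer.zero_grad()
            loss = loss_fn(model(nonac), ctac)
            loss.backward()
            optimizer.step()
    return model
```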

AI-supported approaches for mammography single and double reading: A controlled multireader study.

Brancato B, Magni V, Saieva C, Risso GG, Buti F, Catarzi S, Ciuffi F, Peruzzi F, Regini F, Ambrogetti D, Alabiso G, Cruciani A, Doronzio V, Frati S, Giannetti GP, Guerra C, Valente P, Vignoli C, Atzori S, Carrera V, D'Agostino G, Fazzini G, Picano E, Turini FM, Vani V, Fantozzi F, Vietro D, Cavallero D, Vietro F, Plataroti D, Schiaffino S, Cozzi A

Jun 1 2025
To assess the impact of artificial intelligence (AI) on the diagnostic performance of radiologists with varying experience levels in mammography reading, considering single and simulated double reading approaches. In this retrospective study, 150 mammography examinations (30 with pathology-confirmed malignancies, 120 without malignancies [confirmed by 2-year follow-up]) were reviewed according to five approaches: A) human single reading by 26 radiologists of varying experience; B) AI single reading (Lunit INSIGHT MMG); C) human single reading with simultaneous AI support; D) simulated human-human double reading; E) simulated human-AI double reading, with AI as second independent reader flagging cases with a cancer probability ≥10%. Sensitivity and specificity were calculated and compared using McNemar's test and univariate and multivariable logistic regression. Compared to single reading without AI support, single reading with simultaneous AI support improved mean sensitivity from 69.2% (standard deviation [SD] 15.6) to 84.5% (SD 8.1, p < 0.001), with comparable mean specificity (91.8% versus 90.8%, p = 0.06). The sensitivity increase provided by AI-supported single reading was largest among radiologists whose sensitivity was below the median in non-supported single reading, rising from 56.7% (SD 12.1) to 79.7% (SD 10.2, p < 0.001). In the simulated human-AI double reading approach, sensitivity further increased to 91.8% (SD 3.4), surpassing that of the simulated human-human double reading (87.4%, SD 8.8, p = 0.016), with comparable mean specificity (84.0% versus 83.0%, p = 0.17). AI support significantly enhanced sensitivity across all reading approaches, particularly benefiting the worst-performing radiologists. In the simulated double reading approaches, incorporating AI as an independent second reader significantly increased sensitivity without compromising specificity.
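The simulated human-AI double reading rule (approach E) reduces to a simple disjunction: a case is recalled if either the human reader or the AI (probability ≥ 0.10) flags it. A minimal sketch of that combination and the resulting sensitivity/specificity (the paired comparison itself would use McNemar's test on discordant pairs, as in the paper):

```python
import numpy as np

def double_read(human_recall, ai_prob, threshold=0.10):
    """Recall if either reader flags the case (AI flags when prob >= threshold)."""
    return np.logical_or(human_recall, np.asarray(ai_prob) >= threshold)

def sensitivity_specificity(recalled, has_cancer):
    recalled, has_cancer = np.asarray(recalled), np.asarray(has_cancer)
    tp = np.sum(recalled & has_cancer)
    fn = np.sum(~recalled & has_cancer)
    tn = np.sum(~recalled & ~has_cancer)
    fp = np.sum(recalled & ~has_cancer)
    return tp / (tp + fn), tn / (tn + fp)
```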

Opportunistic assessment of osteoporosis using hip and pelvic X-rays with OsteoSight™: validation of an AI-based tool in a US population.

Pignolo RJ, Connell JJ, Briggs W, Kelly CJ, Tromans C, Sultana N, Brady JM

Jun 1 2025
Identifying patients at risk of low bone mineral density (BMD) from X-rays presents an attractive approach to increase case finding. This paper demonstrates the diagnostic accuracy, reproducibility, and robustness of a new technology, OsteoSight™, which could increase diagnosis and preventive treatment rates for patients with low BMD. This study aimed to evaluate the diagnostic accuracy, reproducibility, and robustness of OsteoSight™, an automated image analysis tool designed to identify low BMD from routine hip and pelvic X-rays. Given the global rise in osteoporosis-related fractures and the limitations of current diagnostic paradigms, OsteoSight offers a scalable solution that integrates into existing clinical workflows. Performance of the technology was tested across three key areas: (1) diagnostic accuracy in identifying low BMD compared to dual-energy X-ray absorptiometry (DXA), the clinical gold standard; (2) reproducibility, through analysis of two images from the same patient; and (3) robustness, by evaluating the tool's performance across different patient demographics and X-ray scanner hardware. The diagnostic accuracy of OsteoSight for identifying patients at risk of low BMD was an area under the receiver operating characteristic curve (AUROC) of 0.834 [0.789-0.880], with consistent results across subgroups of clinical confounders and X-ray scanner hardware. Specificity (0.852 [0.783-0.930]) and sensitivity (0.628 [0.538-0.743]) met pre-specified acceptance criteria. The pre-processing pipeline successfully excluded unsuitable cases, including incorrect body parts, metalwork, and unacceptable femur positioning. The results demonstrate that OsteoSight is accurate in identifying patients with low BMD, suggesting its utility as an opportunistic assessment tool, especially in settings where DXA is inaccessible or not recently performed. The tool's reproducibility and robust performance across various clinical confounders further support its integration into routine orthopedic and medical practice, potentially broadening the reach of osteoporosis assessment and enabling earlier intervention for at-risk patients.
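The abstract reports AUROC with a bracketed 95% interval but does not state how the interval was obtained; the bootstrap is one common choice. A minimal sketch under that assumption:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_with_ci(y_true, y_score, n_boot=2000, seed=0):
    """Point AUROC plus a bootstrap 95% CI, e.g. 0.834 [0.789-0.880]."""
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    rng = np.random.default_rng(seed)
    auc = roc_auc_score(y_true, y_score)
    boots = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:  # need both classes in the resample
            continue
        boots.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return auc, (lo, hi)
```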

FedSynthCT-Brain: A federated learning framework for multi-institutional brain MRI-to-CT synthesis.

Raggio CB, Zabaleta MK, Skupien N, Blanck O, Cicone F, Cascini GL, Zaffino P, Migliorelli L, Spadea MF

Jun 1 2025
The generation of Synthetic Computed Tomography (sCT) images has become a pivotal methodology in modern clinical practice, particularly in the context of Radiotherapy (RT) treatment planning. The use of sCT enables dose calculation, advancing Magnetic Resonance Imaging (MRI)-guided radiotherapy treatments. Moreover, with the introduction of MRI-Positron Emission Tomography (PET) hybrid scanners, the derivation of sCT from MRI can improve the attenuation correction of PET images. Deep learning methods for MRI-to-sCT have shown promising results, but their reliance on single-centre training datasets limits generalisation to diverse clinical settings. Moreover, creating centralised multi-centre datasets may pose privacy concerns. To address these issues, we introduce FedSynthCT-Brain, an approach based on the Federated Learning (FL) paradigm for MRI-to-sCT in brain imaging. This is among the first applications of FL to MRI-to-sCT, employing a cross-silo horizontal FL approach that allows multiple centres to collaboratively train a U-Net-based deep learning model. We validated our method using real multicentre data from four European and American centres, simulating heterogeneous scanner types and acquisition modalities, and tested its performance on an independent dataset from a centre outside the federation. For the unseen centre, the federated model achieved a median Mean Absolute Error (MAE) of 102.0 HU across 23 patients, with an interquartile range of 96.7-110.5 HU. The median (interquartile range) Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) were 0.89 (0.86-0.89) and 26.58 (25.52-27.42), respectively. The analysis showed acceptable performance of the federated approach, highlighting the potential of FL to improve the generalisability of MRI-to-sCT and to advance safe and equitable clinical applications, while fostering collaboration and preserving data privacy.
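The core of cross-silo horizontal FL is that each centre trains locally and only model parameters travel to the server, which aggregates them. A minimal FedAvg-style sketch (the paper's exact aggregation scheme is not given in the abstract; FedAvg is the canonical choice):

```python
import copy
import torch

def federated_average(state_dicts, weights):
    """FedAvg: weighted average of client U-Net parameters.
    weights would typically be local sample counts. Assumes floating-point
    tensors; integer buffers (e.g., BatchNorm counters) need special handling."""
    total = sum(weights)
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = sum(w * sd[key] for sd, w in zip(state_dicts, weights)) / total
    return avg

# One communication round (sketch): each centre fine-tunes the global model on
# its private MRI/CT pairs, then the server aggregates without seeing any data.
# global_model.load_state_dict(federated_average(client_states, client_sizes))
```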

An Intelligent Model of Segmentation and Classification Using Enhanced Optimization-Based Attentive Mask RCNN and Recurrent MobileNet With LSTM for Multiple Sclerosis Types With Clinical Brain MRI.

Gopichand G, Bhargavi KN, Ramprasad MVS, Kodavanti PV, Padmavathi M

Jun 1 2025
In the healthcare sector, magnetic resonance imaging (MRI) is used for multiple sclerosis (MS) assessment, classification, and management. However, interpreting an MRI scan requires exceptional skill, because abnormalities on scans are frequently inconsistent with clinical symptoms, making it difficult to translate findings into effective treatment strategies. Furthermore, MRI is expensive, and its frequent use to monitor the illness increases healthcare costs. To overcome these drawbacks, this research develops a deep learning system for classifying types of MS from clinical brain MRI scans. The major innovation of the model is combining an attention-enhanced convolutional network with recurrent deep learning for classifying the disorder; an optimization algorithm is also proposed for parameter tuning to enhance performance. A total of 3427 images were collected from the database and split into training and testing sets. Segmentation is carried out by an adaptive and attentive-based mask regional convolutional neural network (AA-MRCNN), whose parameters are finely tuned with an enhanced pine cone optimization algorithm (EPCOA) to guarantee high efficiency. The segmented images are then passed to a recurrent MobileNet with long short-term memory (RM-LSTM) to obtain the classification outcomes. In experimental analysis, this deep learning model achieved 95.4% accuracy, 95.3% sensitivity, and 95.4% specificity. These results indicate high potential for accurately classifying MS types.
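The abstract does not detail the RM-LSTM architecture; one plausible reading is a MobileNet feature extractor applied per slice, with an LSTM aggregating across the slice sequence. A hypothetical sketch under that assumption:

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class RecurrentMobileNetLSTM(nn.Module):
    """CNN features per slice, LSTM across the slice sequence (illustrative)."""
    def __init__(self, num_classes, hidden=256):
        super().__init__()
        backbone = mobilenet_v2(weights=None)
        self.features = backbone.features        # per-slice feature maps (1280 ch)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.lstm = nn.LSTM(1280, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                         # x: (B, T, 3, H, W) slice sequence
        b, t = x.shape[:2]
        f = self.pool(self.features(x.flatten(0, 1))).flatten(1)  # (B*T, 1280)
        out, _ = self.lstm(f.view(b, t, -1))      # temporal context across slices
        return self.head(out[:, -1])              # classify from the last step
```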

Visceral Fat Quantified by a Fully Automated Deep-Learning Algorithm and Risk of Incident and Recurrent Diverticulitis.

Ha J, Bridge CP, Andriole KP, Kambadakone A, Clark MJ, Narimiti A, Rosenthal MH, Fintelmann FJ, Gollub RL, Giovannucci EL, Strate LL, Ma W, Chan AT

Jun 1 2025
Obesity is a risk factor for diverticulitis. However, it remains unclear whether visceral fat area, a more precise measurement of abdominal fat, is associated with the risk of diverticulitis. To estimate the risk of incident and recurrent diverticulitis according to visceral fat area. A retrospective cohort study. The Mass General Brigham Biobank. A total of 6654 patients who underwent abdominal CT for clinical indications and had no diagnosis of diverticulitis, IBD, or cancer before the scan were included. Visceral fat area, subcutaneous fat area, and skeletal muscle area were quantified using a deep-learning model applied to abdominal CT. The main exposures were z-scores of body composition metrics normalized by age, sex, and race. Diverticulitis cases were identified using International Classification of Diseases codes for the primary or admitting diagnosis in the electronic health records. The risks of incident diverticulitis, complicated diverticulitis, and recurrent diverticulitis requiring hospitalization were estimated according to quartiles of body composition metric z-scores. A higher visceral fat area z-score was associated with an increased risk of incident diverticulitis (multivariable HR comparing the highest vs lowest quartile, 2.09; 95% CI, 1.48-2.95; p for trend <0.0001), complicated diverticulitis (HR, 2.56; 95% CI, 1.10-5.99; p for trend = 0.02), and recurrence requiring hospitalization (HR, 2.76; 95% CI, 1.15-6.62; p for trend = 0.03). The association between visceral fat area and diverticulitis was not materially different across strata of BMI. Subcutaneous fat area and skeletal muscle area were not significantly associated with diverticulitis. The study population was limited to individuals who underwent CT scans for medical indications. Higher visceral fat area derived from CT was associated with incident and recurrent diverticulitis. Our findings provide insight into the underlying pathophysiology of diverticulitis and may have implications for preventive strategies. See Video Abstract.
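The exposure definition — z-scores normalized by age, sex, and race, then split into quartiles — is straightforward to reproduce. A minimal sketch with pandas (column names are assumptions):

```python
import pandas as pd

def add_vfa_zscore_quartiles(df):
    """Normalize visceral fat area within age/sex/race strata, then form quartiles
    for hazard ratio estimation (highest vs. lowest quartile)."""
    strata = df.groupby(["age_group", "sex", "race"])["visceral_fat_area"]
    df["vfa_z"] = (df["visceral_fat_area"] - strata.transform("mean")) \
        / strata.transform("std")
    df["vfa_quartile"] = pd.qcut(df["vfa_z"], 4, labels=[1, 2, 3, 4])
    return df
```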

Pediatric chest X-ray diagnosis using neuromorphic models.

Bokhari SM, Sohaib S, Shafi M

Jun 1 2025
This research presents an innovative neuromorphic method utilizing Spiking Neural Networks (SNNs) to analyze pediatric chest X-rays (PediCXR) and identify prevalent thoracic illnesses. We incorporate spiking-based machine learning models, such as Spiking Convolutional Neural Networks (SCNN), Spiking Residual Networks (S-ResNet), and Hierarchical Spiking Neural Networks (HSNN), for pediatric chest radiographic analysis on the publicly available benchmark PediCXR dataset. These models employ spatiotemporal feature extraction, residual connections, and event-driven processing to improve diagnostic precision. The HSNN model surpasses benchmark approaches from the literature, achieving a classification accuracy of 96% across six thoracic illness categories, an F1-score of 0.95, and a specificity of 1.0 in pneumonia detection. Our research demonstrates that neuromorphic computing is a feasible and biologically inspired approach to real-time medical imaging diagnostics, significantly improving performance.
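The event-driven unit underlying such models is the spiking neuron. A minimal leaky integrate-and-fire (LIF) update, the most common choice (the abstract does not specify the exact neuron model used):

```python
import numpy as np

def lif_step(v, input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron population."""
    v = v + (dt / tau) * (-v + input_current)  # leaky integration toward input
    spikes = v >= v_thresh                     # binary spike events
    v = np.where(spikes, v_reset, v)           # reset membrane potential on spike
    return v, spikes.astype(float)             # spikes feed the next layer
```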

Semi-supervised spatial-frequency transformer for metal artifact reduction in maxillofacial CT and evaluation with intraoral scan.

Li Y, Ma C, Li Z, Wang Z, Han J, Shan H, Liu J

Jun 1 2025
To develop a semi-supervised domain adaptation technique for metal artifact reduction (MAR) with a spatial-frequency transformer (SFTrans) model (Semi-SFTrans), and to quantitatively compare its performance with supervised models (Sup-SFTrans and ResUNet) and the traditional linear interpolation (LI) MAR method in oral and maxillofacial CT. Supervised models, including Sup-SFTrans and a state-of-the-art model termed ResUNet, were trained with paired simulated CT images, while the semi-supervised model, Semi-SFTrans, was trained with both paired simulated and unpaired clinical CT images. For evaluation on the simulated data, we calculated Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) on the images corrected by four methods: LI, ResUNet, Sup-SFTrans, and Semi-SFTrans. For evaluation on the clinical data, we collected twenty-two clinical cases with real metal artifacts and the corresponding intraoral scan data. Three radiologists visually assessed the severity of artifacts using Likert scales on the original, Sup-SFTrans-corrected, and Semi-SFTrans-corrected images. Quantitative MAR evaluation was conducted by measuring mean Hounsfield Unit (HU) values, standard deviations, and Signal-to-Noise Ratios (SNRs) across Regions of Interest (ROIs) such as the tongue, bilateral buccal regions, lips, and bilateral masseter muscles, using paired t-tests and Wilcoxon signed-rank tests. Further, teeth integrity in the corrected images was assessed by comparing teeth segmentation results from the corrected images against the ground-truth segmentation derived from registered intraoral scan data, using Dice Score and Hausdorff Distance. Sup-SFTrans outperformed LI, ResUNet, and Semi-SFTrans on the simulated dataset. Visual assessments from the radiologists showed average scores of 2.02 ± 0.91 for original CT, 4.46 ± 0.51 for Semi-SFTrans CT, and 3.64 ± 0.90 for Sup-SFTrans CT, with intraclass correlation coefficients (ICCs) > 0.8 for all groups and p < 0.001 between groups. On soft tissue, both Semi-SFTrans and Sup-SFTrans significantly reduced metal artifacts in the tongue (p < 0.001), lips, bilateral buccal regions, and masseter muscle areas (p < 0.05). Semi-SFTrans achieved superior metal artifact reduction than Sup-SFTrans in all ROIs (p < 0.001). SNR results indicated significant differences between Semi-SFTrans and Sup-SFTrans in the tongue (p = 0.0391), bilateral buccal regions (p = 0.0067), lips (p = 0.0208), and bilateral masseter muscle areas (p = 0.0031). Notably, Semi-SFTrans demonstrated better teeth integrity preservation than Sup-SFTrans (Dice Score: p < 0.001; Hausdorff Distance: p = 0.0022). The semi-supervised MAR model, Semi-SFTrans, demonstrated superior metal artifact reduction performance over its supervised counterparts in real dental CT images.
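The LI baseline the paper compares against is the classic sinogram-inpainting approach: metal-corrupted projection bins are replaced by linear interpolation from their uncorrupted neighbors, and the image is then reconstructed (e.g., by FBP) from the inpainted sinogram. A minimal sketch, where the metal trace mask would typically come from forward-projecting a thresholded metal segmentation:

```python
import numpy as np

def li_mar(sinogram, metal_trace):
    """Linear-interpolation MAR on a (n_angles, n_detectors) sinogram.
    metal_trace is a boolean mask of metal-corrupted bins; each projection row
    is assumed to contain at least some uncorrupted bins."""
    out = sinogram.copy()
    for i, (row, mask) in enumerate(zip(sinogram, metal_trace)):
        if mask.any():
            good = np.where(~mask)[0]                     # clean detector bins
            bad = np.where(mask)[0]                       # metal-shadowed bins
            out[i, bad] = np.interp(bad, good, row[good]) # inpaint across trace
    return out
```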