
Bülbül O, Bülbül HM, Göksel S

PubMed · Aug 22, 2025
Breast cancer is the most common cancer and the leading cause of cancer-related deaths in women. Texture analysis provides crucial prognostic information about many types of cancer, including breast cancer. The aim was to examine the relationship between texture features (TFs) of 2-deoxy-2[<sup>18</sup>F]fluoro-D-glucose positron emission tomography (PET)/computed tomography and disease progression in patients with invasive breast cancer. TFs of the primary malignant lesion were extracted from PET images of 112 patients. TFs that showed significant differences between patients who achieved one-, three-, and five-year progression-free survival (PFS) and those who did not were selected and subjected to the least absolute shrinkage and selection operator (LASSO) regression method to reduce features and prevent overfitting. Machine learning (ML) was used to predict PFS using TFs and selected clinicopathological parameters. In models using only TFs, random forest predicted one-, three-, and five-year PFS with area under the curve (AUC) values of 0.730, 0.758, and 0.797, respectively. Naive Bayes predicted one-, three-, and five-year PFS with AUC values of 0.857, 0.804, and 0.843, respectively. The neural network predicted one-, three-, and five-year PFS with AUC values of 0.782, 0.828, and 0.780, respectively. AUC values increased when the models combined TFs with clinicopathological parameters: the lowest AUC values of the combined models when predicting one-, three-, and five-year PFS were 0.867, 0.898, and 0.867, respectively. ML models incorporating PET-derived TFs and clinical parameters may assist in predicting progression during the pre-treatment period in patients with invasive breast carcinoma.
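The feature-selection-plus-classifier pipeline the abstract describes can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' code; the feature count, train/test split, and model settings are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the PET texture-feature matrix (112 patients as in the
# study; feature count and labels are illustrative assumptions).
X, y = make_classification(n_samples=112, n_features=60, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# LASSO shrinks uninformative texture features to exactly zero, limiting overfitting.
lasso = LassoCV(cv=5, random_state=0).fit(X_tr, y_tr)
keep = np.flatnonzero(lasso.coef_)
if keep.size == 0:  # fall back to all features if LASSO zeroes everything
    keep = np.arange(X.shape[1])

# Train one of the abstract's classifiers (random forest) on the reduced feature set.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr[:, keep], y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, keep])[:, 1])
print(round(auc, 3))
```

The same reduced feature matrix would feed the naive Bayes and neural-network models for comparison.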

Colligiani L, Marzi C, Uggenti V, Colantonio S, Tavanti L, Pistelli F, Alì G, Neri E, Romei C

PubMed · Aug 22, 2025
To differentiate interstitial lung diseases (ILDs) with fibrotic and inflammatory patterns using high-resolution computed tomography (HRCT) and a radiomics-based artificial intelligence (AI) pipeline. This single-center study included 84 patients: 50 with idiopathic pulmonary fibrosis (IPF), representative of a fibrotic pattern, and 34 with cellular non-specific interstitial pneumonia (NSIP) secondary to connective tissue disease (CTD), an example of a mostly inflammatory pattern. For a secondary objective, we analyzed 50 additional patients with COVID-19 pneumonia. We performed semi-automatic segmentation of ILD regions using a deep learning model followed by manual review. From each segmented region, 103 radiomic features were extracted. Classification was performed using an XGBoost model with 1000 bootstrap repetitions, and SHapley Additive exPlanations (SHAP) were applied to identify the most predictive features. The model accurately distinguished a fibrotic ILD pattern from an inflammatory one, achieving an average test-set accuracy of 0.91 and an AUROC of 0.98. The classification was driven by radiomic features capturing differences in lung morphology, intensity distribution, and textural heterogeneity between the two disease patterns. In differentiating cellular NSIP from COVID-19, the model achieved an average accuracy of 0.89. Inflammatory ILDs exhibited more uniform imaging patterns compared to the greater variability typically observed in viral pneumonia. Radiomics combined with explainable AI offers promising diagnostic support in distinguishing fibrotic from inflammatory ILD patterns and differentiating inflammatory ILDs from viral pneumonias. This approach could enhance diagnostic precision and provide quantitative support for personalized ILD management.
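The boosted-tree classifier with post-hoc feature attribution can be sketched roughly as below. As an assumption-laden stand-in, sklearn's GradientBoostingClassifier replaces XGBoost and permutation importance replaces SHAP, on synthetic data shaped like the study's 84-patient, 103-feature matrix.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 103-feature radiomic matrix (labels: fibrotic vs. inflammatory).
X, y = make_classification(n_samples=84, n_features=103, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

# Gradient boosting as a proxy for XGBoost.
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

# Permutation importance as an explainability proxy for SHAP values.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:5]  # five most predictive features
print(round(auc, 3), top.tolist())
```

In the study itself, the 1000 bootstrap repetitions would wrap this fit/evaluate step, and SHAP would attribute each prediction to individual radiomic features.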

Filippi L, Bianconi F, Ferrari C, Linguanti F, Battisti C, Urbano N, Minestrini M, Messina SG, Buci L, Baldoncini A, Rubini G, Schillaci O, Palumbo B

PubMed · Aug 22, 2025
To compare PET-derived metrics between digital and analogue PET/CT in hyperparathyroidism, and to assess whether machine learning (ML) applied to quantitative PET parameters can distinguish parathyroid adenoma (PA) from hyperplasia (PH). From an initial multi-centre cohort of 179 patients, 86 were included, comprising 89 PET-positive lesions confirmed histologically (74 PA, 15 PH). Quantitative PET parameters, namely maximum standardised uptake value (SUVmax), metabolic tumour volume (MTV), target-to-background ratio (TBR), and maximum diameter, along with serum PTH and calcium levels, were compared between digital and analogue PET scanners using the Mann-Whitney U test. Receiver operating characteristic (ROC) analysis identified optimal threshold values. ML models (LASSO, decision tree, Gaussian naïve Bayes) were trained on harmonised quantitative features to distinguish PA from PH. Digital PET detected significantly smaller lesions than analogue PET, in both metabolic volume (1.32 ± 1.39 vs. 2.36 ± 2.01 cc; p < 0.001) and maximum diameter (8.35 ± 4.32 vs. 11.87 ± 5.29 mm; p < 0.001). PA lesions showed significantly higher SUVmax and TBR compared to PH (SUVmax: 8.58 ± 3.70 vs. 5.27 ± 2.34; TBR: 14.67 ± 6.99 vs. 8.82 ± 5.90; both p < 0.001). The optimal thresholds for identifying PA were SUVmax > 5.89 and TBR > 11.5. The best ML model (LASSO) achieved an AUC of 0.811, with 79.7% accuracy and balanced sensitivity and specificity. Digital PET outperforms the analogue system in detecting small parathyroid lesions. Additionally, ML analysis of PET-derived metrics and PTH may support non-invasive distinction between adenoma and hyperplasia.
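Optimal ROC thresholds like the reported SUVmax > 5.89 are commonly chosen by maximising Youden's J statistic (sensitivity + specificity − 1). A small sketch on synthetic SUVmax values whose group means loosely follow the reported 8.58 vs. 5.27 (all numbers illustrative, not study data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# Illustrative SUVmax values: 74 adenomas (label 1) vs. 15 hyperplasias (label 0).
suv = np.concatenate([rng.normal(8.58, 3.7, 74), rng.normal(5.27, 2.3, 15)])
y = np.concatenate([np.ones(74), np.zeros(15)])

fpr, tpr, thr = roc_curve(y, suv)
j = tpr - fpr                 # Youden's J at each candidate threshold
best = thr[np.argmax(j)]      # threshold maximising sensitivity + specificity - 1
auc = roc_auc_score(y, suv)
print(round(auc, 3), round(float(best), 2))
```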

Klambauer K, Burger SD, Demmert TT, Mergen V, Moser LJ, Gulsun MA, Schöbinger M, Schwemmer C, Wels M, Allmendinger T, Eberhard M, Alkadhi H, Schmidt B

PubMed · Aug 22, 2025
The aim of this study was to evaluate the feasibility and reproducibility of a novel deep learning (DL)-based coronary plaque quantification tool with automatic case preparation in patients undergoing ultra-high resolution (UHR) photon-counting detector CT coronary angiography (CCTA), and to assess the influence of temporal resolution on plaque quantification. In this retrospective single-center study, 45 patients undergoing clinically indicated UHR CCTA were included. In each scan, 2 image data sets were reconstructed: one in the dual-source mode with 66 ms temporal resolution and one simulating a single-source mode with 125 ms temporal resolution. A novel, DL-based algorithm for fully automated coronary segmentation and intensity-based plaque quantification was applied to both data sets in each patient. Plaque volume quantification was performed at the vessel-level for the entire left anterior descending artery (LAD), left circumflex artery (CX), and right coronary artery (RCA), as well as at the lesion-level for the largest coronary plaque in each vessel. Diameter stenosis grade was quantified for the coronary lesion with the greatest longitudinal extent in each vessel. To assess reproducibility, the algorithm was rerun 3 times in 10 randomly selected patients, and all outputs were visually reviewed and confirmed by an expert reader. Paired Wilcoxon signed-rank tests with Benjamini-Hochberg correction were used for statistical comparisons. One hundred nineteen out of 135 (88.1%) coronary arteries showed atherosclerotic plaques and were included in the analysis. In the reproducibility analysis, repeated runs of the algorithm yielded identical results across all plaque and lumen measurements (P > 0.999). All outputs were confirmed to be anatomically correct, visually consistent, and did not require manual correction. 
At the vessel level, total plaque volumes were higher in the 125 ms reconstructions than in the 66 ms reconstructions in 28 of 45 patients (62%), with calcified and noncalcified plaque volumes being higher in 32 (71%) and 28 (62%) patients, respectively. Total plaque volumes in the LAD, CX, and RCA were significantly higher in the 125 ms reconstructions (681.3 vs. 647.8 mm<sup>3</sup>, P < 0.05). At the lesion level, total plaque volumes were higher in the 125 ms reconstructions in 44 of 45 patients (98%; 447.3 vs. 414.9 mm<sup>3</sup>, P < 0.001), with both calcified and noncalcified plaque volumes being higher in 42 of 45 patients (93%). The median diameter stenosis grades for all vessels were significantly higher in the 125 ms reconstructions (35.4% vs. 28.1%, P < 0.01). This study evaluated a novel DL-based tool with automatic case preparation for quantitative coronary plaque assessment in UHR CCTA data sets. The algorithm was technically robust and reproducible, delivering anatomically consistent outputs that did not require manual correction. Reconstructions with lower temporal resolution (125 ms) systematically overestimated plaque burden compared with higher temporal resolution (66 ms), underscoring that protocol standardization is essential for reliable DL-based plaque quantification.
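The statistical comparison used here, paired Wilcoxon signed-rank tests with Benjamini-Hochberg correction, can be sketched on simulated paired plaque volumes. The bias factors, noise level, and three-comparison setup below are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
# Hypothetical paired vessel-level plaque volumes for 45 patients (e.g. LAD, CX, RCA),
# with the 125 ms reconstruction simulated as a small systematic overestimation.
pvals = []
for bias in (1.06, 1.04, 1.02):            # assumed per-vessel bias of the 125 ms data
    vol_66 = rng.uniform(100, 800, 45)     # 66 ms reconstruction (mm^3)
    vol_125 = vol_66 * rng.normal(bias, 0.03, 45)
    pvals.append(wilcoxon(vol_66, vol_125).pvalue)

# Benjamini-Hochberg step-up correction across the three paired comparisons.
p = np.asarray(pvals)
order = np.argsort(p)
ranked = p[order] * len(p) / np.arange(1, len(p) + 1)
p_adj = np.empty_like(p)
p_adj[order] = np.clip(np.minimum.accumulate(ranked[::-1])[::-1], 0, 1)
print(np.round(p_adj, 5))
```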

Eichhorn H, Spieker V, Hammernik K, Saks E, Felsner L, Weiss K, Preibisch C, Schnabel JA

PubMed · Aug 22, 2025
T2* quantification from gradient echo magnetic resonance imaging is particularly affected by subject motion due to its high sensitivity to magnetic field inhomogeneities, which are influenced by motion and might cause signal loss. Thus, motion correction is crucial to obtain high-quality T2* maps. We extend PHIMO, our previously introduced learning-based physics-informed motion correction method for low-resolution T2* mapping. Our extended version, PHIMO+, utilizes acquisition knowledge to enhance the reconstruction performance for challenging motion patterns and to increase PHIMO's robustness to varying strengths of magnetic field inhomogeneities across the brain. We perform comprehensive evaluations regarding motion detection accuracy and image quality for data with simulated and real motion. PHIMO+ outperforms the learning-based baseline methods both qualitatively and quantitatively with respect to line detection and image quality. Moreover, PHIMO+ performs on par with a conventional state-of-the-art motion correction method for T2* quantification from gradient echo MRI, which relies on redundant data acquisition. PHIMO+'s competitive motion correction performance, combined with a reduction in acquisition time by over 40% compared to the state-of-the-art method, makes it a promising solution for motion-robust T2* quantification in research settings and clinical routine.
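For context, T2* mapping from multi-echo gradient echo data fits a monoexponential decay, S(TE) = S0·exp(−TE/T2*). A minimal log-linear fit on noiseless synthetic data (echo times and the ground-truth T2* of 40 ms are assumptions for illustration):

```python
import numpy as np

# Monoexponential gradient-echo decay: S(TE) = S0 * exp(-TE / T2*).
te = np.arange(5.0, 45.0, 5.0)       # assumed echo times in ms
s0, t2star = 1000.0, 40.0            # assumed ground truth
signal = s0 * np.exp(-te / t2star)

# Log-linear least-squares fit recovers T2* as -1/slope of log(S) vs. TE.
slope, intercept = np.polyfit(te, np.log(signal), 1)
t2star_fit = -1.0 / slope
print(round(t2star_fit, 1))
```

With real, motion-corrupted data the fit is done voxel-wise after motion correction, which is exactly where methods like PHIMO+ intervene.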

Marquez N, Carpio EJ, Santiago MR, Calderon J, Orillaza-Chi R, Salanap SS, Stevens L

PubMed · Aug 22, 2025
The Philippines' high tuberculosis (TB) burden calls for effective point-of-care screening. Systematic TB case finding using chest X-ray (CXR) with computer-aided detection powered by deep learning-based artificial intelligence (AI-CAD) provided this opportunity. We aimed to comprehensively review AI-CAD's real-life performance in the local context to support refining its integration into the country's programmatic TB elimination efforts. Retrospective cross-sectional data analysis was done on case-finding activities conducted in four regions of the Philippines between May 2021 and March 2024. Individuals 15 years and older with complete CXR and molecular World Health Organization-recommended rapid diagnostic (mWRD) test results were included. Presumptive TB was identified by CXR, by TB signs and symptoms, and/or by official radiologist readings. The overall diagnostic accuracy of CXR with AI-CAD, stratified by different factors, was assessed using a fixed abnormality threshold and the mWRD as the reference standard. Given the imbalanced dataset, we evaluated both precision-recall (PRC) and receiver operating characteristic (ROC) plots. Because verification of CAD-negative individuals was limited, we used "pseudo-sensitivity" and "pseudo-specificity" to reflect estimates based on partial testing. We identified potential factors that may affect performance metrics. Using a 0.5 abnormality threshold in analyzing 5740 individuals, the AI-CAD model showed high pseudo-sensitivity at 95.6% (95% CI, 95.1-96.1) but low pseudo-specificity at 28.1% (26.9-29.2) and positive predictive value (PPV) at 18.4% (16.4-20.4). The area under the ROC curve was 0.820, whereas the area under the precision-recall curve was 0.489. Pseudo-sensitivity was higher among males, younger individuals, and newly diagnosed TB. Threshold analysis revealed trade-offs: increasing the threshold score to 0.68 saved more mWRD tests (42%) but increased missed cases (10%).
Threshold adjustments affected PPV, tests saved, and case detection differently across settings. Scaling up AI-CAD use in TB screening could strengthen TB elimination efforts, but threshold scores need to be calibrated to resource availability, prevalence, and program goals. ROC and PRC plots, which make PPV explicit, could serve as valuable metrics for capturing realistic estimates of model performance and cost-benefit trade-offs in the context-specific implementation of resource-limited settings.
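The gap between the ROC AUC (0.820) and the PR AUC (0.489) reported above is a direct consequence of class imbalance, and is easy to reproduce on synthetic scores. The prevalence and score distributions below are illustrative assumptions, not study data.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(0)
# Imbalanced screening cohort of 5740 individuals, ~10% mWRD-positive (assumed prevalence).
n = 5740
y = (rng.random(n) < 0.10).astype(int)
# Abnormality scores: positives shifted upward, mimicking high sensitivity but low PPV.
score = np.where(y == 1, rng.normal(0.80, 0.15, n), rng.normal(0.55, 0.20, n))

auroc = roc_auc_score(y, score)
auprc = average_precision_score(y, score)  # its chance baseline is the prevalence, not 0.5
print(round(auroc, 3), round(auprc, 3))
```

Because the PR curve's baseline equals the prevalence, the PR AUC stays far below the ROC AUC for rare positives, which is why the abstract recommends reporting both.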

Ruthberg J, Gunderson N, Chen P, Harris G, Case H, Bly R, Seibel EJ, Abuzeid WM

PubMed · Aug 22, 2025
Residual disease after endoscopic sinus surgery (ESS) contributes to poor outcomes and revision surgery. Image-guided surgery systems cannot dynamically reflect intraoperative changes. We propose a sensorless, video-based method for intraoperative CT updating using neural radiance fields (NeRF), a deep learning technique for creating 3D reconstructions of the surgical field. Bilateral ESS was performed on three 3D-printed models (n = 6 sides). Postoperative endoscopic videos were processed through a custom NeRF pipeline to generate 3D reconstructions, which were co-registered to preoperative CT scans. Digitally updated CT models were created through algorithmic subtraction of resected regions, then volumetrically segmented and compared to ground-truth postoperative CT. Accuracy was assessed using the Hausdorff distance (surface alignment), the Dice similarity coefficient (DSC) (volumetric overlap), and Bland‒Altman analysis (BAA) (statistical agreement). Comparison of the updated CT and the ground-truth postoperative CT indicated an average Hausdorff distance of 0.27 ± 0.076 mm and a 95th-percentile Hausdorff distance of 0.82 ± 0.165 mm, indicating sub-millimeter surface alignment. The DSC was 0.93 ± 0.012, with values >0.9 indicative of excellent spatial overlap. BAA indicated a modest underestimation of volume on the updated CT versus the ground-truth CT, with a mean volume difference of 0.40 cm<sup>3</sup> and 95% limits of agreement of 0.04‒0.76 cm<sup>3</sup>, indicating that all samples fell within acceptable bounds of variability. Computer vision can enable dynamic intraoperative imaging by generating highly accurate CT updates from monocular endoscopic video without external tracking. By directly visualizing resection progress, this software-driven tool has the potential to enhance surgical completeness in ESS for next-generation navigation platforms.
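The two geometric accuracy metrics used above, the Dice similarity coefficient and the Hausdorff distance, can be computed on toy 3D masks as follows (the mask shapes are arbitrary assumptions standing in for the updated-CT and postoperative-CT segmentations):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Two toy 3D binary masks: b under-segments a by one voxel plane.
a = np.zeros((32, 32, 32), dtype=bool); a[8:24, 8:24, 8:24] = True
b = np.zeros((32, 32, 32), dtype=bool); b[9:24, 8:24, 8:24] = True

# Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|), 1.0 for perfect overlap.
dice = 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Symmetric Hausdorff distance between the two voxel point clouds (in voxel units).
pa, pb = np.argwhere(a), np.argwhere(b)
hd = max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
print(round(dice, 3), hd)
```

In the study these are computed in millimetres on mesh/voxel representations; the one-plane offset here yields a Hausdorff distance of exactly one voxel.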

Lu Z, Zhu B, Ling H, Chen X

PubMed · Aug 22, 2025
To develop a deep learning-based MRI model for predicting tongue cancer T-stage. This retrospective study analyzed clinical and MRI data from 579 tongue cancer patients (Xiangya Cancer Hospital and Jiangsu Province Hospital). T2-weighted (T2WI) and contrast-enhanced T1-weighted (CET1) sequences were preprocessed (anonymization/resampling/calibration). Regions of interest (ROI) were segmented by two radiologists (intraclass correlation coefficient (ICC) > 0.75), and 2375 radiomics features were extracted using PyRadiomics. ResNet18 and ResNet50 algorithms were employed to build deep learning radiomics models (DLRresnet18/DLRresnet50), compared with a radiomics model (Rad) based on 17 optimized features. Performance was evaluated via AUC, decision curve analysis (DCA), integrated discrimination improvement (IDI), and net reclassification improvement (NRI) across the different sets. In the training set, the deep learning models outperformed Rad (AUC: DLRresnet18 = 0.837, DLRresnet50 = 0.847 vs. Rad = 0.828). Test set and external validation set results were consistent (DLRresnet18, AUC = 0.805/0.857; DLRresnet50, AUC = 0.810/0.860). DCA demonstrated that both deep learning models performed better than the Rad model in the training, test, and external validation sets. Furthermore, both the NRI and IDI of the two deep learning models relative to the Rad model were greater than 0. The DLRresnet18 and DLRresnet50 models significantly improve T-stage prediction accuracy over traditional radiomics, reducing subjective interpretation errors and supporting personalized treatment planning. This work provides new ideas and tools for image-assisted diagnosis of tongue cancer T-stage. Level of evidence: III.
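The reclassification metrics reported above, IDI and the category-free NRI, can be sketched on hypothetical predicted probabilities. Both "models" and the data below are invented for illustration; the new model is simply simulated with better separation than the old one.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)  # hypothetical event labels
# Hypothetical predicted probabilities: the "new" model separates classes more strongly.
p_old = np.clip(0.5 + 0.2 * (y - 0.5) + rng.normal(0, 0.2, 200), 0, 1)
p_new = np.clip(0.5 + 0.4 * (y - 0.5) + rng.normal(0, 0.2, 200), 0, 1)

# Integrated Discrimination Improvement: gain in mean predicted risk separation.
idi = ((p_new[y == 1].mean() - p_old[y == 1].mean())
       - (p_new[y == 0].mean() - p_old[y == 0].mean()))

# Category-free NRI: net proportion of subjects whose risk moves in the right direction.
nri = ((np.mean(p_new[y == 1] > p_old[y == 1]) - np.mean(p_new[y == 1] < p_old[y == 1]))
       + (np.mean(p_new[y == 0] < p_old[y == 0]) - np.mean(p_new[y == 0] > p_old[y == 0])))
print(round(idi, 3), round(nri, 3))
```

Values greater than 0 for both, as reported in the abstract, indicate that the deep learning models reclassify patients more accurately than the radiomics baseline.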

Bala K, Kumar KA, Venu D, Dudi BP, Veluri SP, Nirmala V

PubMed · Aug 22, 2025
The COVID-19 pandemic emphasised the necessity for prompt, precise diagnostics, secure data storage, and robust privacy protection in healthcare. Existing diagnostic systems often suffer from limited transparency, inadequate performance, and challenges in ensuring data security and privacy. This research proposes a novel privacy-preserving diagnostic framework, Heterogeneous Convolutional-recurrent attention Transfer learning based ResNeXt with Modified Greater Cane Rat optimisation (HCTR-MGR), that integrates deep learning, Explainable Artificial Intelligence (XAI), and blockchain technology. The HCTR model combines convolutional layers for spatial feature extraction, recurrent layers for capturing spatial dependencies, and attention mechanisms to highlight diagnostically significant regions. A ResNeXt-based transfer learning backbone enhances performance, while the MGR algorithm improves robustness and convergence. A trust-based permissioned blockchain stores encrypted patient metadata to ensure data security and integrity and to eliminate centralised vulnerabilities. The framework also incorporates SHAP and LIME for interpretable predictions. Experimental evaluation on two benchmark chest X-ray datasets demonstrates superior diagnostic performance, achieving 98-99% accuracy, 97-98% precision, 95-97% recall, 99% specificity, and a 95-98% F1-score, offering a 2-6% improvement over conventional models such as ResNet, SARS-Net, and PneuNet. These results underscore the framework's potential for scalable, secure, and clinically trustworthy deployment in real-world healthcare systems.

Vo HP, Williams T, Doroud K, Williams C, Rafecas M

PubMed · Aug 22, 2025
The ProVision scanner is a dedicated prostate PET system with limited angular coverage; it employs a new detector technology that provides high spatial resolution as well as information about depth-of-interaction (DOI) and time-of-flight (TOF). The goal of this work is to develop a flexible image reconstruction framework and study the imaging performance of the current ProVision scanner.
Approach: Experimental datasets, including point-like sources, an image quality phantom, and a pelvic phantom, were acquired with the ProVision scanner to investigate the impact of oblique lines of response introduced via a multi-offset scanning protocol. This approach aims to mitigate data truncation artifacts and further characterise the current imaging performance of the system. For image reconstruction, we applied the list-mode Maximum Likelihood Expectation Maximisation (MLEM) algorithm incorporating TOF information. The system matrix and sensitivity models account for both detector attenuation and position uncertainty.
Main Results: The scanner provides good spatial resolution in the coronal plane; however, elongations caused by the limited angular coverage distort the reconstructed images. The availability of TOF and DOI information, as well as the addition of a multi-offset scanning protocol, could not fully compensate for these distortions.
Significance: The ProVision scanner concept, with its innovative detector technology, shows promising outcomes for fast and inexpensive PET without CT. Despite current limitations due to limited angular coverage, which leads to image distortions, ongoing advancements such as improved timing resolution, regularisation techniques, and artificial intelligence are expected to significantly reduce these artifacts and enhance image quality.
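The list-mode TOF-MLEM reconstruction named above reduces, in its simplest binned form, to the classic MLEM multiplicative update x ← (x / s) · Aᵀ(y / Ax), where A is the system matrix, s the sensitivity image, and y the measured counts. A toy sketch with a random system matrix (dimensions, counts, and iteration budget are arbitrary assumptions, with no TOF/DOI modelling):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy system matrix A (40 detector bins x 16 voxels), columns normalised to sum to 1.
A = rng.random((40, 16))
A /= A.sum(axis=0)
x_true = rng.random(16) * 10          # unknown activity distribution
y = rng.poisson(A @ x_true)           # Poisson-distributed measured counts

# MLEM multiplicative update: x <- (x / sens) * A^T (y / (A x)).
sens = A.sum(axis=0)                  # sensitivity image (ones here, by construction)
x = np.ones(16)                       # uniform initial estimate
for _ in range(50):
    proj = A @ x                      # forward projection
    x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens
print(np.round(x, 2))
```

A useful sanity check is that each MLEM iteration conserves total counts (sum of the sensitivity-weighted image equals the sum of the data), which also holds for the toy example above.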