Kidney volume after endovascular exclusion of abdominal aortic aneurysms by EVAR and FEVAR.

B S, C V, Turkia J B, Weydevelt E V, R P, F L, A K

PubMed · Aug 9 2025
Decreased kidney volume is a sign of renal aging and/or decreased vascularization. The aim of this study was to determine whether renal volume changes 24 months after exclusion of an abdominal aortic aneurysm (AAA), and to compare fenestrated (FEVAR) and infrarenal (EVAR) stent grafts. This was a retrospective single-center study from a prospective registry, including patients between 60 and 80 years of age with normal preoperative renal function (eGFR ≥ 60 mL/min/1.73 m²) who underwent fenestrated (FEVAR) or infrarenal (EVAR) stent grafting between 2015 and 2021. Patients had to have a CT scan at 24 months to be included. Exclusion criteria were stent grafts with renal branches, preoperative renal insufficiency, a single kidney, embolization or coverage of an accessory renal artery, occlusion of a renal artery during follow-up, and AAA rupture. Renal volume was measured using sizing software (EndoSize, Therenva) based on fully automatic deep-learning segmentation of several anatomical structures (arterial lumen, bone, thrombus, heart, etc.), including the kidneys. Renal cysts, when present, were manually excluded from the segmentation. Forty-eight patients were included (24 EVAR vs. 24 FEVAR), and 96 kidneys were segmented. There was no difference between groups in age (78.9 ± 6.7 years vs. 69.4 ± 6.8, p=0.89), eGFR (85.8 ± 12.4 [62-107] mL/min/1.73 m² vs. 81 ± 16.2 [42-107], p=0.36), or renal volume (170.9 ± 29.7 [123-276] mL vs. 165.3 ± 37.4 [115-298], p=0.12). At 24 months in the EVAR group, there was no significant change in eGFR (84.1 ± 17.2 [61-128] mL/min/1.73 m² vs. 81 ± 16.2 [42-107], p=0.36) or renal volume (170.9 ± 29.7 [123-276] mL vs. 165.3 ± 37.4 [115-298], p=0.12). In the FEVAR group at 24 months, there was a non-significant fall in eGFR (84.1 ± 17.2 [61-128] mL/min/1.73 m² vs. 73.8 ± 21.4 [40-110], p=0.09), while renal volume decreased significantly (182 ± 37.8 [123-293] mL vs. 158.9 ± 40.2 [45-258], p=0.007). In this study, there appears to be a significant decrease in renal volume without a significant drop in eGFR 24 months after fenestrated stenting. This decrease may reflect changes in renal perfusion and could potentially be predictive of long-term renal impairment, although this cannot be confirmed within the limits of this small sample. Further studies with long-term follow-up are needed.
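The volume measurement itself is straightforward once a segmentation mask exists. A minimal sketch, assuming a binary kidney mask in NIfTI format from any deep-learning segmenter (EndoSize's internals are proprietary and not shown; the file name is illustrative):

```python
# Sketch: kidney volume in mL from a binary segmentation mask (NIfTI).
import nibabel as nib
import numpy as np

def kidney_volume_ml(mask_path: str) -> float:
    img = nib.load(mask_path)
    mask = np.asarray(img.dataobj) > 0
    # Voxel spacing (mm) from the affine's column norms; product = voxel volume in mm^3.
    spacing = np.sqrt((img.affine[:3, :3] ** 2).sum(axis=0))
    return float(mask.sum() * np.prod(spacing) / 1000.0)  # mm^3 -> mL

# volume = kidney_volume_ml("right_kidney_mask.nii.gz")
```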

Quantitative radiomic analysis of computed tomography scans using machine and deep learning techniques accurately predicts histological subtypes of non-small cell lung cancer: A retrospective analysis.

Panchawagh S, Halder A, Haldule S, Sanker V, Lalwani D, Sequeria R, Naik H, Desai A

PubMed · Aug 9 2025
Non-small cell lung cancer (NSCLC) histological subtypes impact treatment decisions. While pre-surgical histopathological examination is ideal, it is not always feasible. CT radiomic analysis shows promise in predicting NSCLC histological subtypes. This study aimed to predict NSCLC histological subtypes from radiomic features using machine learning and deep learning models. 422 lung CT scans from The Cancer Imaging Archive (TCIA) were analyzed. Primary neoplasms were segmented by expert radiologists. Using PyRadiomics, 2,446 radiomic features were extracted; after feature selection, 179 features remained. Machine learning models including logistic regression (LR), support vector machine (SVM), random forest (RF), XGBoost, LightGBM, and CatBoost were employed, alongside a deep neural network (DNN). RF demonstrated the highest accuracy at 78% (95% CI: 70%-84%) and an AUC-ROC of 94% (95% CI: 90%-96%). LightGBM, XGBoost, and CatBoost had AUC-ROC values of 95%, 93%, and 93%, respectively. The DNN's AUC was 94.4% (95% CI: 94.1%-94.6%). Logistic regression performed worst. For histological subtype prediction, random forest, boosting models, and the DNN were superior. Quantitative radiomic analysis with machine learning can accurately determine NSCLC histological subtypes. Random forest, ensemble models, and DNNs show significant promise for pre-operative NSCLC classification, which can streamline therapy decisions.
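As a rough illustration of this pipeline (not the authors' code), PyRadiomics can extract features from an image/mask pair and scikit-learn can fit the random-forest classifier; the file paths and binary labels below are assumptions:

```python
# Sketch of the radiomics -> random forest pipeline described above.
from radiomics import featureextractor
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
import numpy as np

extractor = featureextractor.RadiomicsFeatureExtractor()  # default feature classes

def case_features(image_path: str, mask_path: str) -> np.ndarray:
    result = extractor.execute(image_path, mask_path)
    # Keep numeric radiomic features; drop the "diagnostics_*" metadata entries.
    return np.array([float(v) for k, v in result.items()
                     if not k.startswith("diagnostics")])

# X: stacked per-case feature vectors; y: histological subtype labels (0/1).
# X = np.stack([case_features(img, msk) for img, msk in cases])
# clf = RandomForestClassifier(n_estimators=500, random_state=0)
# print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```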

SamRobNODDI: q-space sampling-augmented continuous representation learning for robust and generalized NODDI.

Xiao T, Cheng J, Fan W, Dong E, Wang S

PubMed · Aug 8 2025
Neurite Orientation Dispersion and Density Imaging (NODDI) microstructure estimation from diffusion magnetic resonance imaging (dMRI) is of great significance for the discovery and treatment of various neurological diseases. Current deep learning-based methods accelerate NODDI parameter estimation and improve its accuracy. However, most methods require the number and coordinates of gradient directions to remain strictly consistent between training and testing, significantly limiting the generalization and robustness of these models in NODDI parameter estimation. It is therefore imperative to develop methods that perform robustly under varying diffusion gradient directions. In this paper, we propose a q-space sampling-augmented continuous representation learning framework (SamRobNODDI) to achieve robust and generalized NODDI. Specifically, a continuous representation learning method based on q-space sampling augmentation is introduced to fully explore the information between different gradient directions in q-space. Furthermore, we design a sampling consistency loss to constrain the outputs of different sampling schemes, ensuring that the outputs remain as consistent as possible, thereby further enhancing performance and robustness to varying q-space sampling schemes. SamRobNODDI is also a flexible framework that can be applied to different backbone networks. SamRobNODDI was compared against seven state-of-the-art methods across 18 diverse q-space sampling schemes. Extensive experimental validations were conducted under both identical and diverse sampling schemes for training and testing, as well as across varying sampling rates, different loss functions, and multiple network backbones. Results demonstrate that the proposed SamRobNODDI has better performance, robustness, generalization, and flexibility in the face of varying q-space sampling schemes.
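The sampling consistency idea can be sketched in a few lines of PyTorch. The MSE form and the weighting below are assumptions, since the abstract does not specify the loss:

```python
import torch
import torch.nn.functional as F

def sampling_consistency_loss(pred_a: torch.Tensor, pred_b: torch.Tensor) -> torch.Tensor:
    # pred_a / pred_b: NODDI parameter maps predicted from two different
    # q-space subsamplings of the same subject.
    return F.mse_loss(pred_a, pred_b)

def total_loss(pred_a, pred_b, target, lam: float = 0.1) -> torch.Tensor:
    # Supervised fit to reference NODDI maps plus the consistency penalty.
    fit = F.mse_loss(pred_a, target) + F.mse_loss(pred_b, target)
    return fit + lam * sampling_consistency_loss(pred_a, pred_b)
```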

GPT-4 vs. Radiologists: who advances mediastinal tumor classification better across report quality levels? A cohort study.

Wen R, Li X, Chen K, Sun M, Zhu C, Xu P, Chen F, Ji C, Mi P, Li X, Deng X, Yang Q, Song W, Shang Y, Huang S, Zhou M, Wang J, Zhou C, Chen W, Liu C

PubMed · Aug 8 2025
Accurate mediastinal tumor classification is crucial for treatment planning, but diagnostic performance varies with radiologists' experience and report quality. This study evaluated GPT-4's diagnostic accuracy in classifying mediastinal tumors from radiological reports compared to radiologists of different experience levels, using reports of varying quality. We conducted a retrospective study of 1,494 patients from five tertiary hospitals with mediastinal tumors diagnosed via chest CT and pathology. Radiological reports were categorized as low, medium, or high quality based on predefined criteria assessed by experienced radiologists. Six radiologists (two residents, two attending radiologists, and two associate senior radiologists) and GPT-4 evaluated the chest CT reports. Diagnostic performance was analyzed overall, by report quality, and by tumor type using Wald χ2 tests and 95% CIs calculated via the Wilson method. GPT-4 achieved an overall diagnostic accuracy of 73.3% (95% CI: 71.0-75.5), comparable to associate senior radiologists (74.3%, 95% CI: 72.0-76.5; p > 0.05). For low-quality reports, GPT-4 outperformed associate senior radiologists (60.8% vs. 51.1%, p < 0.001). In high-quality reports, GPT-4 was comparable to attending radiologists (80.6% vs. 79.4%, p > 0.05). Diagnostic performance varied by tumor type: GPT-4 was comparable to radiology residents for neurogenic tumors (44.9% vs. 50.3%, p > 0.05), similar to associate senior radiologists for teratomas (68.1% vs. 65.9%, p > 0.05), and superior in diagnosing lymphoma (75.4% vs. 60.4%, p < 0.001). GPT-4 demonstrated interpretation accuracy comparable to associate senior radiologists, excelling on low-quality reports and outperforming them in diagnosing lymphoma. These findings underscore GPT-4's potential to enhance diagnostic performance in challenging diagnostic scenarios.
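Illustratively, such a query could be issued through the OpenAI Python client as below; the prompt wording and settings are hypothetical, since the study does not publish its exact prompt:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_report(report_text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic output for reproducible classification
        messages=[
            {"role": "system",
             "content": ("Classify the mediastinal tumor described in this chest CT "
                         "report (e.g., thymoma, lymphoma, teratoma, neurogenic tumor). "
                         "Answer with the tumor type only.")},
            {"role": "user", "content": report_text},
        ],
    )
    return resp.choices[0].message.content.strip()
```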

Synthesized myelin and iron stainings from 7T multi-contrast MRI via deep learning.

Pittayapong S, Hametner S, Bachrata B, Endmayr V, Bogner W, Höftberger R, Grabner G

PubMed · Aug 8 2025
Iron and myelin are key biomarkers for studying neurodegenerative and demyelinating brain diseases. Multi-contrast MRI techniques, such as R2* and QSM, are commonly used for iron assessment, with histology as the reference standard, but non-invasive myelin assessment remains challenging. To address this, we developed a deep learning model to generate iron and myelin staining images from in vivo multi-contrast MRI data, with a resolution comparable to ex vivo histology macro-scans. A cadaver head was scanned on a 7T MR scanner to acquire T1-weighted and multi-echo GRE data for R2* and QSM processing, followed by histological staining for myelin and iron. To evaluate the generalizability of the model, a second cadaver head and two in vivo MRI datasets were included. After MRI-to-histology registration in the training subject, a self-attention generative adversarial network (GAN) was trained to synthesize myelin and iron staining images from various combinations of MRI contrasts. The model achieved optimal myelin prediction when combining T1w, R2*, and QSM images. Incorporating the synthesized myelin images improved the subsequent prediction of iron staining. The generated images displayed fine details similar to those in histology data and demonstrated generalizability across healthy control subjects. Synthesized myelin images clearly differentiated myelin concentration between white and gray matter, while synthesized iron staining presented distinct patterns, such as particularly high deposition in deep gray matter. This study shows that deep learning can transform MRI data into histological feature images, offering ex vivo insights from in vivo data and contributing to advancements in brain histology research.
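The reported best input combination amounts to channel-wise concatenation of the three contrasts, with the synthesized myelin map then fed back as an extra channel for iron prediction. A schematic PyTorch sketch (tensor shapes and the generator modules are placeholders, not the authors' architecture):

```python
import torch

# Co-registered 2D slices, shape (batch, 1, H, W); random values as placeholders.
t1w = torch.randn(1, 1, 256, 256)
r2star = torch.randn(1, 1, 256, 256)
qsm = torch.randn(1, 1, 256, 256)

gen_input = torch.cat([t1w, r2star, qsm], dim=1)  # (1, 3, H, W) -> myelin generator
# myelin_pred = myelin_generator(gen_input)
# iron_input = torch.cat([gen_input, myelin_pred], dim=1)  # myelin map aids iron prediction
# iron_pred = iron_generator(iron_input)
```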

Subject-specific acceleration of simultaneous quantification of blood flow and T₁ of the brain using a dual-flip-angle phase-contrast stack-of-stars sequence.

Wang Y, Wang M, Liu B, Ding Z, She H, Du YP

PubMed · Aug 8 2025
To develop a highly accelerated MRI technique for simultaneous quantification of blood flow and T₁ of brain tissue. A dual-flip-angle phase-contrast stack-of-stars (DFA PC-SOS) sequence was developed for simultaneous acquisition of highly undersampled data for quantification of arterial blood velocity and T₁ mapping of brain tissue. A deep learning-based algorithm, combining hybrid-feature hash-encoding implicit neural representation with explicit sparse prior knowledge (INRESP), was used for image reconstruction. Magnitude and phase images were used for T₁ mapping and velocity measurements, respectively. The accuracy of the measurements was assessed in a quantitative phantom and six healthy volunteers. T₁ mapping obtained with DFA PC-SOS showed high correlation and consistency with reference measurements in phantom experiments (y = 0.916x + 4.71, R² = 0.9953, ICC = 0.9963). Blood flow measurements in healthy volunteers demonstrated strong correlation and consistency with reference values measured by single-flip-angle PC-SOS (SFA PC-SOS) (y = 1.04x - 0.187, R² = 0.9918, ICC = 0.9967). The proposed technique enabled 16× acceleration with high correlation and consistency with fully sampled data in volunteers (T₁: y = 1.06x + 1.44, R² = 0.9815, ICC = 0.9818; flow: y = 1.01x - 0.0525, R² = 0.9995, ICC = 0.9998). This study demonstrates the feasibility of 16-fold accelerated simultaneous acquisition for flow quantification and T₁ mapping in the brain. The proposed technique provides a rapid and comprehensive assessment of cerebrovascular disease with both vascular hemodynamics and surrounding brain tissue characteristics, and has potential for routine clinical use.
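Two standard relations underlie such a sequence, sketched below; the paper's exact processing is not shown here. Phase-contrast velocity scales the phase difference by the velocity encoding (VENC), and the dual-flip-angle (DESPOT1-style) T₁ estimate comes from two spoiled gradient-echo signals:

```python
import numpy as np

def pc_velocity(phase_diff: np.ndarray, venc: float) -> np.ndarray:
    """Velocity from a phase-difference map in radians; range is +/-VENC (cm/s)."""
    return venc * phase_diff / np.pi

def dfa_t1(s1: float, s2: float, a1: float, a2: float, tr: float) -> float:
    """DESPOT1-style T1 from two SPGR signals at flip angles a1, a2 (radians).

    Plotting S/sin(a) against S/tan(a) gives a line with slope E1 = exp(-TR/T1).
    """
    x1, x2 = s1 / np.tan(a1), s2 / np.tan(a2)
    y1, y2 = s1 / np.sin(a1), s2 / np.sin(a2)
    e1 = (y2 - y1) / (x2 - x1)
    return -tr / np.log(e1)
```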

Application of Artificial Intelligence in Bone Quality and Quantity Assessment for Dental Implant Planning: A Scoping Review.

Qiu S, Yu X, Wu Y

PubMed · Aug 8 2025
To assess how artificial intelligence (AI) models perform in evaluating bone quality and quantity during preoperative planning for dental implants. This review included studies that used AI-based assessments of bone quality and/or quantity from radiographic images in the preoperative phase. Studies published in English before April 2025 were included, identified through searches of PubMed/MEDLINE, Embase, Web of Science, Scopus, and the Cochrane Library, supplemented by manual searches. Eleven studies met the inclusion criteria. Five studies focused on bone quality evaluation and six included volumetric assessments using AI models. Performance measures included accuracy, sensitivity, specificity, precision, F1 score, and Dice coefficient, compared against human expert evaluations. AI models demonstrated high accuracy (76.2%-99.84%), sensitivity (78.9%-100%), and specificity (66.2%-99%). AI models show potential for the evaluation of bone quality and quantity, although standardization and external validation studies are lacking. Future studies should incorporate multicenter datasets, integrate AI into clinical workflows, and develop refined models that better reflect real-life conditions. AI has the potential to offer clinicians reliable automated evaluations of bone quality and quantity, with the promise of a fully automated implant planning system, and may make evidence-based preoperative decision-making more efficient.
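Of the performance measures listed, the Dice coefficient is the one most specific to segmentation; for binary masks it is a few lines:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum() + eps)
```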

Automated coronary artery segmentation / tissue characterization and detection of lipid-rich plaque: An integrated backscatter intravascular ultrasound study.

Masuda Y, Takeshita R, Tsujimoto A, Sahashi Y, Watanabe T, Fukuoka D, Hara T, Kanamori H, Okura H

PubMed · Aug 8 2025
Intravascular ultrasound (IVUS)-based tissue characterization has been used to detect vulnerable plaque or lipid-rich plaque (LRP). Recently, advances in artificial intelligence (AI) have enabled automated coronary arterial plaque segmentation and tissue characterization. The purpose of this study was to evaluate the feasibility and diagnostic accuracy of a deep learning model for plaque segmentation, tissue characterization, and identification of LRP. A total of 1,098 IVUS images from 67 patients who underwent IVUS-guided percutaneous coronary intervention were selected for the training group, while 1,100 IVUS images from 100 vessels (88 patients) were used for the validation group. A seven-layer U-Net++ was applied for automated coronary artery segmentation and tissue characterization. Segmentation and quantification of the external elastic membrane (EEM), lumen, and guidewire artifact were performed and compared with manual measurements. Plaque tissue characterization was conducted using integrated backscatter (IB)-IVUS as the gold standard. LRP was defined as %lipid area ≥ 65%. The deep learning model accurately segmented the EEM and lumen. AI-predicted %lipid area (R = 0.90, P < 0.001), %fibrosis area (R = 0.89, P < 0.001), %dense fibrosis area (R = 0.81, P < 0.001), and %calcification area (R = 0.89, P < 0.001) showed strong correlations with IB-IVUS measurements. The model predicted LRP with a sensitivity of 62%, specificity of 94%, positive predictive value of 69%, negative predictive value of 92%, and an area under the receiver operating characteristic curve of 0.919 (95% CI: 0.902-0.934). The deep learning model demonstrated accurate automatic segmentation and tissue characterization of human coronary arteries, showing promise for identifying LRP.
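Given a per-pixel tissue label map from the segmentation model, the %lipid computation and the LRP rule in the abstract reduce to simple area ratios; the integer label scheme below is an assumption for illustration:

```python
import numpy as np

TISSUES = {1: "lipid", 2: "fibrosis", 3: "dense_fibrosis", 4: "calcification"}

def tissue_percentages(label_map: np.ndarray) -> dict:
    plaque_pixels = np.isin(label_map, list(TISSUES)).sum()
    return {name: 100.0 * (label_map == lab).sum() / plaque_pixels
            for lab, name in TISSUES.items()}

def is_lipid_rich(label_map: np.ndarray) -> bool:
    # The study's definition: lipid-rich plaque if %lipid area >= 65%.
    return tissue_percentages(label_map)["lipid"] >= 65.0
```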

Deep Learning Chest X-Ray Age, Epigenetic Aging Clocks and Associations with Age-Related Subclinical Disease in the Project Baseline Health Study.

Chandra J, Short S, Rodriguez F, Maron DJ, Pagidipati N, Hernandez AF, Mahaffey KW, Shah SH, Kiel DP, Lu MT, Raghu VK

PubMed · Aug 8 2025
Chronological age is an important component of medical risk scores and decision-making. However, there is considerable variability in how individuals age. We recently published an open-source deep learning model to assess biological age from chest radiographs (CXR-Age), which predicts all-cause and cardiovascular mortality better than chronological age. Here, we compare CXR-Age to two established epigenetic aging clocks (first generation, Horvath Age; second generation, DNAm PhenoAge) to test which is more strongly associated with cardiopulmonary disease and frailty. Our cohort consisted of 2,097 participants from the Project Baseline Health Study, a prospective cohort study of individuals from four US sites. We compared the association between the different aging clocks and measures of cardiopulmonary disease, frailty, and protein abundance collected at the participants' first annual visit, using linear regression models adjusted for common confounders. We found that CXR-Age was associated with coronary calcium, cardiovascular risk factors, worsening pulmonary function, increased frailty, and plasma abundance of two proteins implicated in neuroinflammation and aging. Associations with DNAm PhenoAge were weaker for pulmonary function and for all metrics in middle-aged adults. We identified thirteen proteins that were associated with DNAm PhenoAge, one of which (CDH13) was also associated with CXR-Age. No associations were found with Horvath Age. These results suggest that CXR-Age may serve as a better metric of cardiopulmonary aging than epigenetic aging clocks, especially in midlife adults.
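The association analyses are ordinary least squares with confounder adjustment. A self-contained sketch on synthetic data (variable names and the synthetic relationships are illustrative, not the study's):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({"age": rng.uniform(40, 80, n), "sex": rng.integers(0, 2, n)})
df["cxr_age"] = df["age"] + rng.normal(0, 5, n)                    # biological-age estimate
df["coronary_calcium"] = 2 * df["cxr_age"] + rng.normal(0, 20, n)  # synthetic outcome

# Association of CXR-Age with the outcome, adjusted for chronological age and sex.
model = smf.ols("coronary_calcium ~ cxr_age + age + sex", data=df).fit()
print(model.params["cxr_age"], model.pvalues["cxr_age"])
```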

An Anisotropic Cross-View Texture Transfer with Multi-Reference Non-Local Attention for CT Slice Interpolation.

Uhm KH, Cho H, Hong SH, Jung SW

PubMed · Aug 8 2025
Computed tomography (CT) is one of the most widely used non-invasive imaging modalities for medical diagnosis. In clinical practice, CT images are usually acquired with large slice thicknesses due to the high cost of memory storage and operation time, resulting in an anisotropic CT volume with much lower inter-slice resolution than in-plane resolution. Since such inconsistent resolution may lead to difficulties in disease diagnosis, deep learning-based volumetric super-resolution methods have been developed to improve inter-slice resolution. Most existing methods conduct single-image super-resolution on the through-plane or synthesize intermediate slices from adjacent slices; however, the anisotropic characteristic of 3D CT volumes has not been well explored. In this paper, we propose a novel cross-view texture transfer approach for CT slice interpolation that fully utilizes the anisotropic nature of 3D CT volumes. Specifically, we design a unique framework that takes high-resolution in-plane texture details as a reference and transfers them to low-resolution through-plane images. To this end, we introduce a multi-reference non-local attention module that extracts meaningful features for reconstructing through-plane high-frequency details from multiple in-plane images. Through extensive experiments, we demonstrate that our method performs significantly better in CT slice interpolation than existing competing methods on public CT datasets, including a real-paired benchmark, verifying the effectiveness of the proposed framework. The source code of this work is available at https://github.com/khuhm/ACVTT.
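A generic multi-reference non-local attention block can be sketched in PyTorch as below; this illustrates the general mechanism of attending from a through-plane feature map to several in-plane references, not the authors' exact module (see their repository for the real one):

```python
import torch
import torch.nn as nn

class MultiRefNonLocalAttention(nn.Module):
    """Attend from a through-plane feature map to N in-plane reference maps."""
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.scale = channels ** -0.5

    def forward(self, target: torch.Tensor, refs: torch.Tensor) -> torch.Tensor:
        # target: (B, C, H, W); refs: (B, N, C, H, W), same spatial size.
        b, n, c, h, w = refs.shape
        q = self.q(target).flatten(2).transpose(1, 2)                 # (B, HW, C)
        kv = refs.reshape(b * n, c, h, w)
        k = self.k(kv).reshape(b, n, c, h * w).permute(0, 1, 3, 2).reshape(b, n * h * w, c)
        v = self.v(kv).reshape(b, n, c, h * w).permute(0, 1, 3, 2).reshape(b, n * h * w, c)
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)  # (B, HW, N*HW)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return target + out  # residual: transferred texture added to target features

# out = MultiRefNonLocalAttention(32)(torch.randn(1, 32, 64, 64),
#                                     torch.randn(1, 3, 32, 64, 64))
```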