
Pulmonary Biomechanics in COPD: Imaging Techniques and Clinical Applications.

Aguilera SM, Chaudhary MFA, Gerard SE, Reinhardt JM, Bodduluri S

PubMed | Sep 1 2025
The respiratory system depends on complex biomechanical processes to enable gas exchange. The mechanical properties of the lung parenchyma, airways, vasculature, and surrounding structures play an essential role in overall ventilation efficacy. These complex biomechanical processes, however, are significantly altered in chronic obstructive pulmonary disease (COPD) due to emphysematous destruction of lung parenchyma, chronic airway inflammation, and small airway obstruction. Recent advancements in computed tomography (CT) and magnetic resonance imaging (MRI) acquisition techniques, combined with sophisticated image post-processing algorithms and deep neural network integration, have enabled comprehensive quantitative assessment of lung structure, tissue deformation, and lung function at the tissue level. These methods have improved phenotyping and therapeutic strategies and refined our understanding of the pathological processes that compromise pulmonary function in COPD. In this review, we discuss recent developments in imaging and image processing methods for studying pulmonary biomechanics, with specific focus on clinical applications for COPD, including the assessment of regional ventilation, planning of endobronchial valve treatment, prediction of disease onset and progression, sizing of lungs for transplantation, and guiding mechanical ventilation. These advanced image-based biomechanical measurements, when combined with clinical expertise, play a critical role in disease management and personalized therapeutic interventions for patients with COPD.
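One common way the tissue-deformation assessment described above is quantified is the Jacobian determinant of a registration-derived deformation field, where J > 1 indicates local expansion and J < 1 local contraction. The sketch below is a minimal, hypothetical 2D illustration (not code from any of the cited studies), assuming finite-difference derivatives of the displacement field are already available.

```python
# Minimal sketch: regional volume change from a deformation field is often
# summarized by J = det(I + grad(u)), the determinant of the deformation
# gradient. Derivative values below are invented for illustration.

def jacobian_determinant_2d(du_dx, du_dy, dv_dx, dv_dy):
    """Determinant of the 2D deformation gradient I + grad(u)."""
    return (1.0 + du_dx) * (1.0 + dv_dy) - du_dy * dv_dx

# Uniform 10% expansion along both axes -> J = 1.1 * 1.1 = 1.21
j = jacobian_determinant_2d(0.1, 0.0, 0.0, 0.1)
print(round(j, 2))  # 1.21
```

In 3D lung registration the same idea applies with a 3x3 determinant per voxel, typically computed over inspiratory-to-expiratory CT pairs.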

Comparison of the diagnostic performance of the artificial intelligence-based TIRADS algorithm with established classification systems for thyroid nodules.

Bozkuş A, Başar Y, Güven K

PubMed | Sep 1 2025
This study aimed to evaluate and compare the diagnostic performance of various Thyroid Imaging Reporting and Data Systems (TIRADS), with a particular focus on the artificial intelligence-based TIRADS (AI-TIRADS), in characterizing thyroid nodules. In this retrospective study conducted between April 2016 and May 2022, 1,322 thyroid nodules from 1,139 patients with confirmed cytopathological diagnoses were included. Each nodule was assessed using TIRADS classifications defined by the American College of Radiology (ACR-TIRADS), the American Thyroid Association (ATA-TIRADS), the European Thyroid Association (EU-TIRADS), the Korean Thyroid Association (K-TIRADS), and the AI-TIRADS. Three radiologists independently evaluated the ultrasound (US) characteristics of the nodules using all classification systems. Diagnostic performance was assessed using sensitivity, specificity, positive predictive value (PPV), and negative predictive value, and comparisons were made using the McNemar test. Among the nodules, 846 (64%) were benign, 299 (22.6%) were of intermediate risk, and 147 (11.1%) were malignant. The AI-TIRADS demonstrated a PPV of 21.2% and a specificity of 53.6%, outperforming the other systems in specificity without compromising sensitivity. The specificities of the ACR-TIRADS, the ATA-TIRADS, the EU-TIRADS, and the K-TIRADS were 44.6%, 39.3%, 40.1%, and 40.1%, respectively (all pairwise comparisons with the AI-TIRADS: <i>P</i> < 0.001). The PPVs for the ACR-TIRADS, the ATA-TIRADS, the EU-TIRADS, and the K-TIRADS were 18.5%, 17.9%, 17.9%, and 17.4%, respectively (all pairwise comparisons with the AI-TIRADS, excluding the ACR-TIRADS: <i>P</i> < 0.05). The AI-TIRADS shows promise in improving diagnostic specificity and reducing unnecessary biopsies in thyroid nodule assessment while maintaining high sensitivity. The findings suggest that the AI-TIRADS may enhance risk stratification, leading to better patient management. 
Additionally, the study found that the presence of multiple suspicious US features markedly increases the risk of malignancy, whereas isolated features do not substantially elevate the risk. The AI-TIRADS can enhance thyroid nodule risk stratification by improving diagnostic specificity and reducing unnecessary biopsies, potentially leading to more efficient patient management and better utilization of healthcare resources.
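The pairwise comparisons above rely on the McNemar test, which evaluates only the discordant pairs (nodules that two classification systems rate differently). A minimal stdlib sketch follows; the discordant counts are invented for illustration, not taken from the study.

```python
# Hedged sketch of a McNemar test with continuity correction.
# b, c = counts of nodules classified positive by one system but not the other.
import math

def mcnemar(b, c):
    """Return (chi-square statistic, two-sided p-value) for discordant counts."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    p = math.erfc(math.sqrt(chi2 / 2))  # survival function of chi-square, 1 df
    return chi2, p

chi2, p = mcnemar(b=40, c=10)
print(round(chi2, 2), p < 0.001)  # 16.82 True
```

A large imbalance between the discordant counts, as here, yields a significant difference between the two systems; balanced discordance does not.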

An innovative bimodal computed tomography data-driven deep learning model for predicting aortic dissection: a multi-center study.

Li Z, Chen L, Zhang S, Zhang X, Zhang J, Ying M, Zhu J, Li R, Song M, Feng Z, Zhang J, Liang W

PubMed | Sep 1 2025
Aortic dissection (AD) is a lethal emergency requiring prompt diagnosis. Current computed tomography angiography (CTA)-based diagnosis requires contrast agents, which costs time, whereas existing deep learning (DL) models only support single-modality inputs [non-contrast computed tomography (CT) or CTA]. In this study, we propose a bimodal DL framework that independently processes both modalities, enabling dual-path detection and improving diagnostic efficiency. Patients who underwent non-contrast CT and CTA from February 2016 to September 2021 were retrospectively included from three institutions: the First Affiliated Hospital, Zhejiang University School of Medicine (Center I), Zhejiang Hospital (Center II), and Yiwu Central Hospital (Center III). A two-stage DL model for predicting AD was developed. The first stage used an aorta detection network (AoDN) to localize the aorta in non-contrast CT or CTA images. Image patches containing the detected aorta were cut from the CT images and combined into an image patch sequence, which was input to an aortic dissection diagnosis network (ADDiN) to diagnose AD in the second stage. Performance was assessed for aorta detection using average precision at an intersection-over-union threshold of 0.5 (AP@0.5) and for diagnosis using the area under the receiver operating characteristic curve (AUC). The first cohort, comprising 102 patients (53±15 years, 80 men) from two institutions, was used for the AoDN, whereas the second cohort, consisting of 861 cases (55±15 years, 623 men) from three institutions, was used for the ADDiN. For the aorta detection task, the AoDN achieved an AP@0.5 of 99.14% on the non-contrast CT test set and 99.34% on the CTA test set. For the AD diagnosis task, the ADDiN achieved AUCs of 0.98 on the non-contrast CT test set and 0.99 on the CTA test set. The proposed bimodal CT data-driven DL model accurately diagnoses AD, facilitating prompt hospital diagnosis and treatment of AD.
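The AP@0.5 detection metric used for the AoDN counts a predicted aorta bounding box as correct when its intersection-over-union (IoU) with the ground-truth box is at least 0.5. A minimal sketch of the IoU computation, with invented box coordinates:

```python
# Illustrative sketch (not the authors' code): IoU of two axis-aligned boxes.

def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2) corner coordinates."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 2x2 boxes overlapping by half a side: IoU = 2 / 6 = 1/3
print(round(iou((0, 0, 2, 2), (1, 0, 3, 2)), 3))  # 0.333
```

Averaging precision over detection-confidence thresholds with this 0.5 IoU cutoff yields the AP@0.5 values reported above.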

Deep learning-based super-resolution method for projection image compression in radiotherapy.

Chang Z, Shang J, Fan Y, Huang P, Hu Z, Zhang K, Dai J, Yan H

PubMed | Sep 1 2025
Cone-beam computed tomography (CBCT) is a three-dimensional (3D) imaging method designed for routine target verification of cancer patients during radiotherapy. The images are reconstructed from a sequence of projection images obtained by the on-board imager attached to a radiotherapy machine. CBCT images are usually stored in a health information system, but the projection images are mostly discarded due to their massive volume. To store them economically, this study investigated a deep learning (DL)-based super-resolution (SR) method for compressing the projection images. In image compression, low-resolution (LR) images were down-sampled from the high-resolution (HR) projection images by a given factor and then encoded to a video file. In image restoration, LR images were decoded from the video file and then up-sampled to HR projection images via the DL network. Three SR DL networks, a convolutional neural network (CNN), a residual network (ResNet), and a generative adversarial network (GAN), were tested along with three video coding-decoding (codec) algorithms: Advanced Video Coding (AVC), High Efficiency Video Coding (HEVC), and AOMedia Video 1 (AV1). Based on two databases of natural and projection images, the performance of the SR networks and video codecs was evaluated with the compression ratio (CR), peak signal-to-noise ratio (PSNR), video quality metric (VQM), and structural similarity index measure (SSIM). The codec AV1 achieved the highest CR among the three codecs. The CRs of AV1 were 13.91, 42.08, 144.32, and 289.80 for down-sampling factors (DSFs) of 0 (non-SR), 2, 4, and 6, respectively. The SR network ResNet achieved the best restoration accuracy among the three SR networks. Its PSNRs were 69.08, 41.60, 37.08, and 32.44 dB for the four DSFs, respectively; its VQMs were 0.06%, 3.65%, 6.95%, and 13.03%; and its SSIMs were 0.9984, 0.9878, 0.9798, and 0.9518.
As the DSF increased, the CR increased proportionally with only modest degradation of the restored images. Applying the SR model can thus improve the CR beyond what the video codecs achieve alone. This compression method is not only effective for two-dimensional (2D) projection images but is also applicable to the 3D images used in radiotherapy.
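The PSNR fidelity metric reported above compares restored pixels against the originals via the mean squared error. A minimal stdlib sketch, with invented 8-bit pixel values rather than projection-image data:

```python
# Hedged sketch of PSNR in dB; max_val is the peak pixel value (255 for 8-bit).
import math

def psnr(original, restored, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences."""
    mse = sum((o - r) ** 2 for o, r in zip(original, restored)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Every pixel off by 1 -> MSE = 1 -> PSNR = 10 * log10(255^2) ~ 48.13 dB
print(round(psnr([100, 110, 120, 130], [101, 109, 121, 129]), 2))  # 48.13
```

Higher DSFs leave more detail for the SR network to hallucinate back, which is why PSNR falls from 69.08 dB (DSF 0) to 32.44 dB (DSF 6) in the results above.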

Enhancing diagnostic precision for thyroid C-TIRADS category 4 nodules: a hybrid deep learning and machine learning model integrating grayscale and elastographic ultrasound features.

Zou D, Lyu F, Pan Y, Fan X, Du J, Mai X

PubMed | Sep 1 2025
Accurate and timely diagnosis of thyroid cancer is critical for clinical care, and artificial intelligence can enhance this process. This study aims to develop and validate an intelligent assessment model called C-TNet, based on the Chinese Guidelines for Ultrasound Malignancy Risk Stratification of Thyroid Nodules (C-TIRADS) and real-time elasticity imaging. The goal is to differentiate between benign and malignant characteristics of thyroid nodules classified as C-TIRADS category 4. We evaluated the performance of C-TNet against ultrasonographers and BMNet, a model trained exclusively on histopathological findings indicating benign or malignant nature. The study included 3,545 patients with pathologically confirmed C-TIRADS category 4 thyroid nodules from two tertiary hospitals in China: the Affiliated Hospital of Integrated Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine (n=3,463 patients) and Jiangyin People's Hospital (n=82 patients). The cohort from the Affiliated Hospital of Integrated Traditional Chinese and Western Medicine was randomly divided into a training set and a validation set (7:3 ratio), while the cohort from Jiangyin People's Hospital served as the external validation set. The C-TNet model was developed by extracting image features from the training set and integrating them with six commonly used classifier algorithms: logistic regression (LR), linear discriminant analysis (LDA), random forest (RF), kernel support vector machine (K-SVM), adaptive boosting (AdaBoost), and Naive Bayes (NB). Its performance was evaluated using both internal and external validation sets, with statistical differences analyzed through the Chi-squared test. The C-TNet model effectively integrates feature extraction from deep neural networks with an RF classifier, utilizing grayscale and elastography ultrasound data.
It successfully differentiates benign from malignant thyroid nodules, achieving an area under the curve (AUC) of 0.873, comparable to the performance of senior physicians (AUC: 0.868). The model demonstrates generalizability across diverse clinical settings, positioning itself as a transformative decision-support tool for enhancing the risk stratification of thyroid nodules.
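An AUC such as the 0.873 reported above has a useful probabilistic reading: it equals the probability that a randomly chosen malignant nodule receives a higher model score than a randomly chosen benign one (the Mann-Whitney formulation). A minimal sketch with invented scores:

```python
# Hedged sketch: rank-based AUC. Ties between scores count as half a win.

def auc(pos_scores, neg_scores):
    """AUC as the fraction of (positive, negative) pairs ranked correctly."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# 8 of the 9 malignant/benign pairs are ordered correctly -> AUC = 8/9
print(round(auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.3]), 3))  # 0.889
```

This pairwise view is why an AUC near 0.87 for both C-TNet and senior physicians indicates comparable ranking quality, independent of any single decision threshold.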

Detection of Microscopic Glioblastoma Infiltration in Peritumoral Edema Using Interactive Deep Learning With DTI Biomarkers: Testing via Stereotactic Biopsy.

Tu J, Shen C, Liu J, Hu B, Chen Z, Yan Y, Li C, Xiong J, Daoud AM, Wang X, Li Y, Zhu F

PubMed | Sep 1 2025
Microscopic tumor cell infiltration beyond contrast-enhancing regions influences glioblastoma prognosis but remains undetectable using conventional MRI. To develop and evaluate the glioblastoma infiltrating area interactive detection framework (GIAIDF), an interactive deep-learning framework that integrates diffusion tensor imaging (DTI) biomarkers for identifying microscopic infiltration within peritumoral edema. Retrospective. A total of 73 training patients (51.13 ± 13.87 years; 47 M/26F) and 25 internal validation patients (52.82 ± 10.76 years; 14 M/11F) from Center 1; 25 external validation patients (47.29 ± 11.39 years; 16 M/9F) from Center 2; 13 prospective biopsy patients (45.62 ± 9.28 years; 8 M/5F) from Center 1. 3.0 T MRI including three-dimensional contrast-enhanced T1-weighted BRAVO sequence (repetition time = 7.8 ms, echo time = 3.0 ms, inversion time = 450 ms, slice thickness = 1 mm), three-dimensional T2-weighted fluid-attenuated inversion recovery (repetition time = 7000 ms, echo time = 120 ms, inversion time = 2000 ms, slice thickness = 1 mm), and diffusion tensor imaging (repetition time = 8500 ms, echo time = 63 ms, slice thickness = 2 mm). Histopathology of 25 stereotactic biopsy specimens served as the reference standard. Primary metrics included AUC, accuracy, sensitivity, and specificity. GIAIDF heatmaps were co-registered to biopsy trajectories using Ratio-FAcpcic (0.16-0.22) as interactive priors. ROC analysis (DeLong's method) for AUC; recall, precision, and F1 score for prediction validation. GIAIDF demonstrated recall = 0.800 ± 0.060, precision = 0.915 ± 0.057, F1 = 0.852 ± 0.044 in internal validation (n = 25) and recall = 0.778 ± 0.053, precision = 0.890 ± 0.051, F1 = 0.829 ± 0.040 in external validation (n = 25). 
Among 13 patients undergoing stereotactic biopsy, 25 peri-ED specimens were analyzed: 18 without tumor cell infiltration and 7 with infiltration, achieving AUC = 0.929 (95% CI: 0.804-1.000), sensitivity = 0.714, specificity = 0.944, and accuracy = 0.880. Infiltrated sites showed significantly higher risk scores (0.549 ± 0.194 vs. 0.205 ± 0.175 at non-infiltrated sites, p < 0.001). This study provides a potential tool, GIAIDF, for identifying regions of glioblastoma (GBM) infiltration within areas of peritumoral edema (peri-ED) based on preoperative MR images.
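The biopsy-validation figures above can be checked by arithmetic: with 7 infiltrated and 18 non-infiltrated specimens, a sensitivity of 0.714 implies 5 true positives and a specificity of 0.944 implies 17 true negatives (these cell counts are inferred from the reported rates, not stated in the abstract).

```python
# Worked check of the reported sensitivity, specificity, and accuracy.
tp, fn = 5, 2    # infiltrated specimens: detected / missed
tn, fp = 17, 1   # non-infiltrated specimens: correctly excluded / flagged

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fn + tn + fp)
print(round(sensitivity, 3), round(specificity, 3), round(accuracy, 3))
# 0.714 0.944 0.88
```

The recovered accuracy of 22/25 = 0.880 matches the abstract, confirming the three metrics are internally consistent.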

Challenges in diagnosis of sarcoidosis.

Bączek K, Piotrowski WJ, Bonella F

PubMed | Sep 1 2025
Diagnosing sarcoidosis remains challenging. Histology findings and a variable clinical presentation can mimic other infectious, malignant, and autoimmune diseases. This review synthesizes current evidence on histopathology, sampling techniques, imaging modalities, and biomarkers and explores how emerging 'omics' and artificial intelligence tools may sharpen diagnostic accuracy. Within the typical granulomatous lesions, limited or 'burned-out' necrosis is an ancillary finding, present in up to one-third of sarcoid biopsies, that demands a careful differential diagnostic work-up. Endobronchial ultrasound-guided transbronchial needle aspiration of lymph nodes has replaced mediastinoscopy as the first-line sampling tool, while cryobiopsy is still under validation. Volumetric PET metrics such as total lung glycolysis and somatostatin-receptor tracers refine activity assessment; combined FDG PET/MRI improves detection of occult cardiac disease. Advanced bronchoalveolar lavage (BAL) immunophenotyping via flow cytometry and serum, BAL, and genetic biomarkers have been shown to correlate with inflammatory burden but have low diagnostic value. Multi-omics signatures and positron emission tomography/computed tomography (PET/CT) radiomics, supported by deep-learning algorithms, show promising results for noninvasive diagnostic confirmation, phenotyping, and disease monitoring. No single test is conclusive for diagnosing sarcoidosis; an integrated, multidisciplinary strategy is needed. Large, multicenter, multiethnic studies are essential to validate and translate findings from emerging AI tools and omics research into clinical routine.

Predicting Postoperative Prognosis in Pediatric Malignant Tumor With MRI Radiomics and Deep Learning Models: A Retrospective Study.

Chen Y, Hu X, Fan T, Zhou Y, Yu C, Yu J, Zhou X, Wang B

PubMed | Sep 1 2025
The aim of this study is to develop a multimodal machine learning model that integrates magnetic resonance imaging (MRI) radiomics, deep learning features, and clinical indexes to predict the 3-year postoperative disease-free survival (DFS) in pediatric patients with malignant tumors. A cohort of 260 pediatric patients with brain tumors who underwent R0 resection (aged ≤ 14 y) was retrospectively included in the study. Preoperative T1-enhanced MRI images and clinical data were collected. Image preprocessing involved N4 bias field correction and Z-score standardization, with tumor areas manually delineated using 3D Slicer. A total of 1130 radiomics features (Pyradiomics) and 511 deep learning features (3D ResNet-18) were extracted. Six machine learning models (eg, SVM, RF, LightGBM) were developed after dimensionality reduction through Lasso regression analysis, based on selected clinical indexes such as tumor diameter, GCS score, and nutritional status. Bayesian optimization was applied to adjust model parameters. The evaluation metrics included AUC, sensitivity, and specificity. The fusion model (LightGBM) achieved an AUC of 0.859 and an accuracy of 85.2% in the validation set. When combined with clinical indexes, the final model's AUC improved to 0.909. Radiomics features, such as texture heterogeneity, and clinical indexes, including tumor diameter ≥ 5 cm and preoperative low albumin, significantly contributed to prognosis prediction. The multimodal model demonstrated effective prediction of the 3-year postoperative DFS in pediatric brain tumors, offering a scientific foundation for personalized treatment.
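The Z-score standardization mentioned in the preprocessing above rescales each feature to zero mean and unit variance before radiomics and deep features are combined. A minimal stdlib sketch, with invented values:

```python
# Hedged sketch of per-feature Z-score standardization: z = (x - mu) / sigma.
import statistics

def zscore(values):
    mu = statistics.fmean(values)
    sigma = statistics.pstdev(values)  # population standard deviation
    return [(v - mu) / sigma for v in values]

z = zscore([10.0, 20.0, 30.0])
print([round(v, 3) for v in z])  # [-1.225, 0.0, 1.225]
```

In practice the mean and standard deviation are estimated on the training set only and reused on the validation set, so that no information leaks across the split.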

Added prognostic value of histogram features from preoperative multi-modal diffusion MRI in predicting Ki-67 proliferation for adult-type diffuse gliomas.

Huang Y, He S, Hu H, Ma H, Huang Z, Zeng S, Mazu L, Zhou W, Zhao C, Zhu N, Wu J, Liu Q, Yang Z, Wang W, Shen G, Zhang N, Chu J

PubMed | Sep 1 2025
Ki-67 labelling index (LI), a critical marker of tumor proliferation, is vital for grading adult-type diffuse gliomas and predicting patient survival. However, its accurate assessment currently relies on invasive biopsy or surgical resection, making it challenging to non-invasively predict Ki-67 LI and subsequent prognosis. Therefore, this study aimed to investigate whether histogram analysis of multi-parametric diffusion model metrics, specifically diffusion tensor imaging (DTI), diffusion kurtosis imaging (DKI), and neurite orientation dispersion and density imaging (NODDI), could help predict Ki-67 LI in adult-type diffuse gliomas and further predict patient survival. A total of 123 patients with diffuse gliomas who underwent preoperative bipolar spin-echo diffusion magnetic resonance imaging (MRI) were included. Diffusion metrics (DTI, DKI, and NODDI) and their histogram features were extracted and used to develop a nomogram model in the training set (n=86), and the performance was verified in the test set (n=37). The area under the receiver operating characteristic curve of the nomogram model was calculated. The outcome cohort, comprising the same 123 patients, was used to evaluate the predictive value of the diffusion nomogram model for overall survival (OS). Cox proportional hazards regression was performed to predict OS. Among the 123 patients, 87 exhibited high Ki-67 LI (Ki-67 LI >5%). The patients had a mean age of 46.08±13.24 years, and 39 were female. Tumor grading showed 46 cases of grade 2, 21 cases of grade 3, and 56 cases of grade 4. The nomogram model included eight histogram features from diffusion MRI and showed good performance for predicting Ki-67 LI, with areas under the receiver operating characteristic curve (AUCs) of 0.92 [95% confidence interval (CI): 0.85-0.98, sensitivity =0.85, specificity =0.84] and 0.84 (95% CI: 0.64-0.98, sensitivity =0.77, specificity =0.73) in the training set and test set, respectively.
The nomogram incorporating these variables showed good discrimination in predicting Ki-67 LI and glioma grade. A low nomogram model score relative to the median value in the outcome cohort was independently associated with OS (P<0.01). Accurate prediction of the Ki-67 LI in adult-type diffuse glioma patients was achieved using a multi-modal diffusion MRI histogram radiomics model, which also reliably and accurately determined survival. ClinicalTrials.gov Identifier: NCT06572592.
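First-order histogram features of the kind extracted from the diffusion-metric maps above summarize the voxel-value distribution within the tumor mask. A minimal, illustrative stdlib sketch (not the authors' pipeline), computing three representative features on invented voxel values:

```python
# Hedged sketch: mean, standard deviation, and skewness of a voxel histogram.
import statistics

def histogram_features(voxels):
    mu = statistics.fmean(voxels)
    sd = statistics.pstdev(voxels)
    skew = sum(((v - mu) / sd) ** 3 for v in voxels) / len(voxels)
    return {"mean": round(mu, 3), "sd": round(sd, 3), "skewness": round(skew, 3)}

# A few low values with one high outlier -> right-skewed distribution
print(histogram_features([1.0, 1.0, 1.0, 5.0]))
# {'mean': 2.0, 'sd': 1.732, 'skewness': 1.155}
```

Feature sets like this, computed per diffusion metric (DTI, DKI, NODDI), are what feed the nomogram after feature selection.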

Automated coronary analysis in ultrahigh-spatial resolution photon-counting detector CT angiography: Clinical validation and intra-individual comparison with energy-integrating detector CT.

Kravchenko D, Hagar MT, Varga-Szemes A, Schoepf UJ, Schoebinger M, O'Doherty J, Gülsün MA, Laghi A, Laux GS, Vecsey-Nagy M, Emrich T, Tremamunno G

PubMed | Sep 1 2025
To evaluate a deep-learning algorithm for automated coronary artery analysis on ultrahigh-resolution photon-counting detector coronary computed tomography (CT) angiography and to compare its performance with that of expert readers, using invasive coronary angiography as the reference standard. Thirty-two patients (mean age 68.6 years; 81% male) underwent both energy-integrating detector and ultrahigh-resolution photon-counting detector CT within 30 days. Expert readers scored each image using the Coronary Artery Disease-Reporting and Data System classification, and results were compared against invasive angiography. After a three-month wash-out, one reader reanalyzed the photon-counting detector CT images assisted by the algorithm. Sensitivity, specificity, accuracy, inter-reader agreement, and reading times were recorded for each method. Across 401 arterial segments, inter-reader agreement improved from substantial (κ = 0.75) on energy-integrating detector CT to near-perfect (κ = 0.86) on photon-counting detector CT. The algorithm alone achieved 85% sensitivity, 91% specificity, and 90% accuracy on energy-integrating detector CT, and 85%, 96%, and 95% on photon-counting detector CT. Compared with invasive angiography on photon-counting detector CT, manual and automated reads had similar sensitivity (67%), but manual assessment slightly outperformed in specificity (85% vs. 79%) and accuracy (84% vs. 78%). When the reader was assisted by the algorithm, specificity rose to 97% (p < 0.001), accuracy to 95%, and reading time decreased by 54% (p < 0.001). This deep-learning algorithm demonstrates high agreement with experts and improved diagnostic performance on photon-counting detector CT. Expert review augmented by the algorithm further increases specificity and markedly reduces interpretation time.
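The inter-reader agreement values quoted above (κ = 0.75 and κ = 0.86) are Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch with an invented 2x2 agreement table (not the study's data):

```python
# Hedged sketch of Cohen's kappa from a 2x2 reader-agreement table.

def cohens_kappa(both_pos, a_only, b_only, both_neg):
    n = both_pos + a_only + b_only + both_neg
    po = (both_pos + both_neg) / n              # observed agreement
    p_a = (both_pos + a_only) / n               # reader A positive rate
    p_b = (both_pos + b_only) / n               # reader B positive rate
    pe = p_a * p_b + (1 - p_a) * (1 - p_b)      # chance agreement
    return (po - pe) / (1 - pe)

# 90/100 segments rated identically, with balanced positive rates
print(round(cohens_kappa(40, 5, 5, 50), 3))  # 0.798
```

By convention, kappa in the 0.61-0.80 range is read as substantial agreement and 0.81-1.00 as near-perfect, which is the interpretation used in the abstract.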