Page 121 of 141 (1403 results)

Deep learning reconstruction for improved image quality of ultra-high-resolution brain CT angiography: application in moyamoya disease.

Ma Y, Nakajima S, Fushimi Y, Funaki T, Otani S, Takiya M, Matsuda A, Kozawa S, Fukushima Y, Okuchi S, Sakata A, Yamamoto T, Sakamoto R, Chihara H, Mineharu Y, Arakawa Y, Nakamoto Y

pubmed logopapersMay 29 2025
To investigate vessel delineation and image quality of ultra-high-resolution (UHR) CT angiography (CTA) reconstructed using deep learning reconstruction (DLR) optimised for brain CTA (DLR-brain) in moyamoya disease (MMD), compared with DLR optimised for body CT (DLR-body) and hybrid iterative reconstruction (Hybrid-IR). This retrospective study included 50 patients with suspected or diagnosed MMD who underwent UHR brain CTA. All images were reconstructed using DLR-brain, DLR-body, and Hybrid-IR. Quantitative analysis focussed on moyamoya perforator vessels in the basal ganglia and periventricular anastomosis. For these small vessels, edge sharpness, peak CT number, vessel contrast, full width at half maximum (FWHM), and image noise were measured and compared. Qualitative analysis was performed by visual assessment to compare vessel delineation and image quality. DLR-brain significantly improved edge sharpness, peak CT number, vessel contrast, and FWHM, and significantly reduced image noise compared with DLR-body and Hybrid-IR (P < 0.05). DLR-brain significantly outperformed the other algorithms in the visual assessment (P < 0.001). DLR-brain provided superior visualisation of small intracranial vessels compared with DLR-body and Hybrid-IR in UHR brain CTA.
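The small-vessel metrics compared above (peak CT number, vessel contrast, FWHM) are profile-based measurements. As a minimal illustration, the full width at half maximum of a 1-D intensity profile sampled across a vessel can be computed as below; the function and example values are hypothetical, since the abstract does not specify the authors' exact measurement procedure:

```python
def fwhm(profile, spacing_mm=1.0, baseline=None):
    """Full width at half maximum (FWHM) of a 1-D intensity profile
    sampled across a vessel.  `spacing_mm` is the sample spacing;
    `baseline` defaults to the profile minimum."""
    if baseline is None:
        baseline = min(profile)
    peak = max(range(len(profile)), key=lambda i: profile[i])
    half = baseline + (profile[peak] - baseline) / 2.0

    def cross(start, end, step):
        # walk from the peak until the profile drops below half maximum,
        # then linearly interpolate the crossing position
        i = start
        while i != end and profile[i + step] >= half:
            i += step
        if i == end:  # half maximum never reached inside the profile
            return float(end)
        lo, hi = profile[i + step], profile[i]
        return i + step * (half - hi) / (lo - hi)

    left = cross(peak, 0, -1)
    right = cross(peak, len(profile) - 1, 1)
    return (right - left) * spacing_mm
```

With a profile peaking at 100 HU over a zero baseline, the interpolated width at 50 HU is returned in millimetres given the sample spacing.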

Mild to moderate COPD, vitamin D deficiency, and longitudinal bone loss: The MESA study.

Ghotbi E, Hathaway QA, Hadidchi R, Momtazmanesh S, Bancks MP, Bluemke DA, Barr RG, Post WS, Budoff M, Smith BM, Lima JAC, Demehri S

pubmed logopapersMay 29 2025
Despite the established association between chronic obstructive pulmonary disease (COPD) severity and risk of osteoporosis, even after accounting for the known shared confounding variables (e.g., age, smoking, history of exacerbations, steroid use), there is a paucity of data on bone loss in mild to moderate COPD, which is more prevalent in the general population. We conducted a longitudinal analysis using data from the Multi-Ethnic Study of Atherosclerosis. Participants with chest CT at Exam 5 (2010-2012) and Exam 6 (2016-2018) were included. Mild to moderate COPD was defined as a forced expiratory volume in 1 s (FEV<sub>1</sub>) to forced vital capacity ratio of <0.70 and FEV<sub>1</sub> of 50 % or higher. Vitamin D deficiency was defined as serum vitamin D < 20 ng/mL. We utilized a validated deep learning algorithm to perform automated multilevel segmentation of vertebral bodies (T1-T10) from chest CT and derive 3D volumetric thoracic vertebral BMD measurements at Exams 5 and 6. Of the 1226 participants, 173 had known mild to moderate COPD at baseline, while 1053 had no known COPD. After adjusting for age, race/ethnicity, sex, body mass index, bisphosphonate use, alcohol consumption, smoking, diabetes, physical activity, C-reactive protein, and vitamin D deficiency, mild to moderate COPD was associated with a faster decline in BMD (estimated difference, β = -0.38 g/cm<sup>3</sup>/year; 95 % CI: -0.74, -0.02). A significant interaction between COPD and vitamin D deficiency (p = 0.001) prompted stratified analyses. Among participants with vitamin D deficiency (47 % of participants), COPD was associated with a faster decline in BMD (-0.64 g/cm<sup>3</sup>/year; 95 % CI: -1.17 to -0.12), whereas no significant association was observed among those with normal vitamin D in either crude or adjusted models. Mild to moderate COPD was associated with longitudinal declines in vertebral BMD exclusively in participants with vitamin D deficiency over the 6-year follow-up.
Vitamin D deficiency may play a crucial role in bone loss among patients with mild to moderate COPD.
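The stratified result above reflects an interaction effect: the COPD-BMD association differs by vitamin D status. A difference-in-differences of group mean annual BMD change captures the same idea in miniature (illustrative values only; the study fitted adjusted regression models, not this simple contrast):

```python
def interaction_contrast(rates):
    """Difference-in-differences of mean annual BMD change.

    rates: dict mapping (copd, vitd_deficient) -> list of per-participant
    annual BMD changes (g/cm^3/year).  A non-zero contrast suggests the
    COPD effect differs by vitamin D status."""
    mean = {k: sum(v) / len(v) for k, v in rates.items()}
    copd_effect_deficient = mean[(True, True)] - mean[(False, True)]
    copd_effect_normal = mean[(True, False)] - mean[(False, False)]
    return copd_effect_deficient - copd_effect_normal
```

A regression with a COPD-by-deficiency interaction term estimates the same contrast while adjusting for covariates.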

CT-denoimer: efficient contextual transformer network for low-dose CT denoising.

Zhang Y, Xu F, Zhang R, Guo Y, Wang H, Wei B, Ma F, Meng J, Liu J, Lu H, Chen Y

pubmed logopapersMay 29 2025
Low-dose computed tomography (LDCT) effectively reduces radiation exposure to patients, but introduces severe noise artifacts that affect diagnostic accuracy. Recently, Transformer-based network architectures have been widely applied to LDCT image denoising, generally achieving superior results compared to traditional convolutional methods. However, these methods are often hindered by high computational costs and struggle to capture complex local contextual features, which negatively impacts denoising performance. In this work, we propose CT-Denoimer, an efficient CT Denoising Transformer network that captures both global correlations and intricate, spatially varying local contextual details in CT images, enabling the generation of high-quality images. The core of our framework is a Transformer module that consists of two key components: the Multi-Dconv head Transposed Attention (MDTA) and the Mixed Contextual Feed-forward Network (MCFN). The MDTA block captures global correlations in the image with linear computational complexity, while the MCFN block manages multi-scale local contextual information, both static and dynamic, through a series of Enhanced Contextual Transformer (eCoT) modules. In addition, we incorporate Operation-Wise Attention Layers (OWALs) to enable collaborative refinement in the proposed CT-Denoimer, enhancing its ability to handle the complex and varying noise patterns of LDCT images more effectively. Extensive experimental validation on both the AAPM-Mayo public dataset and a real-world clinical dataset demonstrated the state-of-the-art performance of the proposed CT-Denoimer. It achieved a peak signal-to-noise ratio (PSNR) of 33.681 dB, a structural similarity index measure (SSIM) of 0.921, an information fidelity criterion (IFC) of 2.857, and a visual information fidelity (VIF) of 0.349. Subjective assessment by radiologists gave an average score of 4.39, confirming its clinical applicability and clear advantages over existing methods.
This study presents an innovative CT denoising Transformer network that sets a new benchmark in LDCT image denoising, excelling in both noise reduction and fine structure preservation.
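Of the metrics reported above, PSNR is defined directly from the mean squared error between the denoised and reference images. A minimal pure-Python sketch over flattened pixel lists (hypothetical values; real evaluations operate on full image arrays):

```python
import math

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio, in dB, between two equal-sized images
    given as flat lists of pixel values.  `data_range` defaults to the
    dynamic range of the reference image."""
    if data_range is None:
        data_range = max(reference) - min(reference)
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(data_range ** 2 / mse)
```

Higher values indicate less residual noise; SSIM, IFC, and VIF complement it by weighting structural and perceptual fidelity.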

Automated classification of midpalatal suture maturation stages from CBCTs using an end-to-end deep learning framework.

Milani OH, Mills L, Nikho A, Tliba M, Allareddy V, Ansari R, Cetin AE, Elnagar MH

pubmed logopapersMay 29 2025
Accurate classification of midpalatal suture maturation stages is critical for orthodontic diagnosis, treatment planning, and the assessment of maxillary growth. Cone Beam Computed Tomography (CBCT) imaging offers detailed insights into this craniofacial structure but poses unique challenges for deep learning image recognition model design due to its high dimensionality, noise artifacts, and variability in image quality. To address these challenges, we propose a novel technique that highlights key image features through a simple filtering process to improve image clarity prior to analysis, thereby enhancing the learning process and better aligning with the distribution of the input data domain. Our preprocessing steps include region-of-interest extraction, followed by high-pass and Sobel filtering to emphasize low-level features. The feature extraction integrates Convolutional Neural Network (CNN) architectures, such as EfficientNet and ResNet18, alongside our novel Multi-Filter Convolutional Residual Attention Network (MFCRAN) enhanced with Discrete Cosine Transform (DCT) layers. Moreover, to better capture the inherent order within the data classes, we augment the supervised training process with a ranking loss by attending to the relationship within the label domain. Furthermore, to adhere to diagnostic constraints while training the model, we introduce a tailored data augmentation strategy to improve classification accuracy and robustness. To validate our method, we employed a k-fold cross-validation protocol on a private dataset comprising 618 CBCT images, annotated into five stages (A, B, C, D, and E) by expert evaluators. The experimental results demonstrate the effectiveness of our proposed approach, achieving the highest classification accuracy of 79.02%, significantly outperforming competing architectures, which achieved accuracies ranging from 71.87% to 78.05%.
This work introduces a novel and fully automated framework for midpalatal suture maturation classification, marking a substantial advancement in orthodontic diagnostics and treatment planning.
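The ranking loss mentioned above exploits the natural ordering of maturation stages A through E. One common form is a pairwise hinge penalty on predicted scalar scores; the sketch below is a generic version of that idea, not the paper's exact loss:

```python
def pairwise_ranking_loss(scores, stages, margin=1.0):
    """Hinge-style pairwise loss encouraging predicted scalar scores to
    respect the ordering of maturation stages (0=A .. 4=E).

    For every pair where stage[i] < stage[j], the loss penalises
    score[j] failing to exceed score[i] by at least `margin`."""
    loss, pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if stages[i] < stages[j]:  # i should score lower than j
                loss += max(0.0, margin - (scores[j] - scores[i]))
                pairs += 1
    return loss / pairs if pairs else 0.0
```

In training, a term like this would be added to the cross-entropy loss so that misorderings between distant stages are penalised more than between adjacent ones.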

CT-Based Radiomics for Predicting PD-L1 Expression in Non-small Cell Lung Cancer: A Systematic Review and Meta-analysis.

Salimi M, Vadipour P, Khosravi A, Salimi B, Mabani M, Rostami P, Seifi S

pubmed logopapersMay 29 2025
The efficacy of immunotherapy in non-small cell lung cancer (NSCLC) is intricately associated with baseline PD-L1 expression rates. The standard method for measuring PD-L1 is immunohistochemistry, which is invasive and may not capture tumor heterogeneity. The primary aim of the current study is to assess whether CT-based radiomics models can accurately predict PD-L1 expression status in NSCLC and to evaluate their quality and potential gaps in their design. Scopus, PubMed, Web of Science, Embase, and IEEE databases were systematically searched up until February 14, 2025, to retrieve relevant studies. Data from validation cohorts of models that classified patients by tumor proportion score (TPS) of 1% (TPS1) and 50% (TPS50) were extracted and analyzed separately. Quality assessment was performed with the METRICS and QUADAS-2 tools. Diagnostic test accuracy meta-analysis was conducted using a bivariate random-effects approach to pool values of performance metrics. The qualitative synthesis included 22 studies, and the meta-analysis analyzed 11 studies with 997 individual subjects. The pooled AUC, sensitivity, and specificity of TPS1 models were 0.85, 0.76, and 0.79, respectively. The pooled AUC, sensitivity, and specificity of TPS50 models were 0.88, 0.72, and 0.86, respectively. The QUADAS-2 tool identified a substantial risk of bias in the "flow and timing" and "index test" domains. Certain methodological limitations were highlighted by the METRICS score, which averaged 58.1% and ranged from 24% to 83.4%. CT-based radiomics demonstrates strong potential as a non-invasive method for predicting PD-L1 expression in NSCLC. While promising, significant methodological gaps must be addressed to achieve the generalizability and reliability required for clinical application.
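For intuition, pooling proportions such as sensitivity is often done on the logit scale with inverse-variance weights. The sketch below is a simplified fixed-effect version; the review itself used a bivariate random-effects model, which additionally models between-study heterogeneity and the sensitivity-specificity correlation:

```python
import math

def pool_logit(proportions, ns):
    """Fixed-effect inverse-variance pooling of proportions (e.g. per-study
    sensitivities) on the logit scale.  `ns` are the study sample sizes.
    Illustrative simplification, not a bivariate random-effects model."""
    num = den = 0.0
    for p, n in zip(proportions, ns):
        # continuity adjustment keeps the logit finite at p = 0 or 1
        x = min(max(p, 0.5 / n), 1 - 0.5 / n)
        logit = math.log(x / (1 - x))
        var = 1.0 / (n * x * (1 - x))  # delta-method variance of logit(p)
        num += logit / var
        den += 1.0 / var
    pooled = num / den
    return 1.0 / (1.0 + math.exp(-pooled))  # back-transform to a proportion
```

The back-transformed pooled value always lies within the range of the study estimates, with larger studies weighted more heavily.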

Classification of biomedical lung cancer images using optimized binary bat technique by constructing oblique decision trees.

Aswal S, Ahuja NJ, Mehra R

pubmed logopapersMay 29 2025
Imbalanced data and the high-dimensional features of lung cancer CT images create significant challenges in clinical research. Improper classification of these images increases the complexity of the classification process, compromises the extraction of biomedical traits, and yields incomplete classification of lung cancer. Conventional approaches are only partially successful in dealing with the complex nature of high-dimensional and imbalanced biomedical data, so there is a crucial need for a robust technique for classifying lung cancer images. In this paper, we propose a novel structural formation of the oblique decision tree (ODT) using a swarm intelligence technique, the Binary Bat Swarm Algorithm (BBSA). Applying BBSA enables a competitive recognition rate for making structural reforms while building the ODT. This integration improves the ability of the machine learning swarm classifier (MLSC) to handle high-dimensional features and imbalanced biomedical datasets. Adaptive feature selection using BBSA allows the exploration and selection of the relevant features required by the ODT for classification. The ODT classifier introduces flexibility in decision boundaries, enabling it to capture complex linkages within biomedical data. The proposed MLSC model effectively handles high-dimensional, imbalanced lung cancer datasets using the TCGA_LUSC_2016 and TCGA_LUAD_2016 modalities, achieving superior precision, recall, F-measure, and execution efficiency. Experiments conducted in Python consistently demonstrate enhanced classification accuracy and reduced misclassification rates compared with existing methods.
The MLSC is assessed in terms of both qualitative and quantitative measurements to study the capability of the proposed MLSC in classifying the instances more effectively than the conventional state-of-the-art methods.
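A binary bat algorithm drives feature selection by moving candidate 0/1 feature masks through a velocity update and a sigmoid transfer function. The sketch below is a much-simplified generic version; the paper's BBSA variant includes loudness and pulse-rate dynamics not modelled here, and the fitness function (e.g. a cross-validated classifier score) is supplied by the caller:

```python
import math
import random

def binary_bat_select(n_features, fitness, n_bats=10, n_iter=30, seed=0):
    """Simplified binary bat algorithm for feature selection.
    `fitness` maps a 0/1 feature mask (tuple) to a score to maximise.
    Returns the best mask found as a list of 0/1 values."""
    rng = random.Random(seed)
    bats = [[rng.randint(0, 1) for _ in range(n_features)]
            for _ in range(n_bats)]
    vel = [[0.0] * n_features for _ in range(n_bats)]
    best = max(bats, key=lambda b: fitness(tuple(b)))[:]
    for _ in range(n_iter):
        for b in range(n_bats):
            freq = rng.random()  # random frequency in [0, 1)
            for d in range(n_features):
                vel[b][d] += (bats[b][d] - best[d]) * freq
                # sigmoid transfer turns velocity into a bit probability
                prob = 1.0 / (1.0 + math.exp(-vel[b][d]))
                bats[b][d] = 1 if rng.random() < prob else 0
            if fitness(tuple(bats[b])) > fitness(tuple(best)):
                best = bats[b][:]
    return best
```

In the paper's setting, each mask would select a feature subset on which the oblique decision tree is trained and scored.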

Manual and automated facial de-identification techniques for patient imaging with preservation of sinonasal anatomy.

Ding AS, Nagururu NV, Seo S, Liu GS, Sahu M, Taylor RH, Creighton FX

pubmed logopapersMay 29 2025
Facial recognition of reconstructed computed tomography (CT) scans poses patient privacy risks, necessitating reliable facial de-identification methods. Current methods obscure sinuses, turbinates, and other anatomy relevant for otolaryngology. We present a facial de-identification method that preserves these structures, along with two automated workflows for large-volume datasets. A total of 20 adult head CTs from the New Mexico Decedent Image Database were included. Using 3D Slicer, a seed-growing technique was performed to label the skin around the face. This label was dilated bidirectionally to form a 6-mm mask that obscures facial features. This technique was then automated using: (1) segmentation propagation that deforms an atlas head CT and corresponding mask to match other scans and (2) a deep learning model (nnU-Net). Accuracy of these methods against manually generated masks was evaluated with Dice scores and modified Hausdorff distances (mHDs). Manual de-identification resulted in facial match rates of 45.0% (zero-fill), 37.5% (deletion), and 32.5% (re-face). Dice scores for automated face masks using segmentation propagation and nnU-Net were 0.667 ± 0.109 and 0.860 ± 0.029, respectively, with mHDs of 4.31 ± 3.04 mm and 1.55 ± 0.71 mm. Match rates after de-identification using segmentation propagation (zero-fill: 42.5%; deletion: 40.0%; re-face: 35.0%) and nnU-Net (zero-fill: 42.5%; deletion: 35.0%; re-face: 30.0%) were comparable to manual masks. We present a simple facial de-identification approach for head CTs, as well as automated methods for large-scale implementation. These techniques show promise for preventing patient identification while preserving underlying sinonasal anatomy, but further studies using live patient photographs are necessary to fully validate their effectiveness.
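The two mask-agreement metrics used above, the Dice score and the modified Hausdorff distance, can be computed directly from voxel coordinate sets. A minimal 2-D sketch (the study worked on 3-D CT masks, but the definitions carry over unchanged):

```python
def dice(a, b):
    """Dice similarity coefficient between two voxel coordinate sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

def modified_hausdorff(a, b):
    """Modified Hausdorff distance (Dubuisson & Jain): the larger of the
    two mean nearest-neighbour distances between point sets a and b."""
    def mean_nn(src, dst):
        return sum(
            min(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 for q in dst)
            for p in src) / len(src)
    return max(mean_nn(a, b), mean_nn(b, a))
```

Dice rewards overlap (1.0 = identical masks), while the mHD penalises boundary disagreement in physical units, which is why both are usually reported together.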

ImmunoDiff: A Diffusion Model for Immunotherapy Response Prediction in Lung Cancer

Moinak Bhattacharya, Judy Huang, Amna F. Sher, Gagandeep Singh, Chao Chen, Prateek Prasanna

arxiv logopreprintMay 29 2025
Accurately predicting immunotherapy response in Non-Small Cell Lung Cancer (NSCLC) remains a critical unmet need. Existing radiomics and deep learning-based predictive models rely primarily on pre-treatment imaging to predict categorical response outcomes, limiting their ability to capture the complex morphological and textural transformations induced by immunotherapy. This study introduces ImmunoDiff, an anatomy-aware diffusion model designed to synthesize post-treatment CT scans from baseline imaging while incorporating clinically relevant constraints. The proposed framework integrates anatomical priors, specifically lobar and vascular structures, to enhance fidelity in CT synthesis. Additionally, we introduce a novel cbi-Adapter, a conditioning module that ensures pairwise-consistent multimodal integration of imaging and clinical data embeddings. A clinical variable conditioning mechanism further leverages demographic data, blood-based biomarkers, and PD-L1 expression to refine the generative process. Evaluations on an in-house NSCLC cohort treated with immune checkpoint inhibitors demonstrate a 21.24% improvement in balanced accuracy for response prediction and a 0.03 increase in c-index for survival prediction. Code will be released soon.
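The c-index reported for survival prediction is the fraction of usable patient pairs whose predicted risks are ordered consistently with their observed event times. A minimal sketch of Harrell's concordance index (a generic implementation, not the authors' code):

```python
def c_index(times, events, risks):
    """Harrell's concordance index.

    times: follow-up times; events: 1 if the event occurred, 0 if
    censored; risks: predicted risk scores (higher = earlier expected
    event).  Ties in risk count as half-concordant."""
    concordant = ties = total = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # usable pair: i had the event strictly before j's time
            if events[i] == 1 and times[i] < times[j]:
                total += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / total if total else float("nan")
```

A value of 0.5 corresponds to random ordering and 1.0 to perfect risk ranking, so a 0.03 gain is measured on this scale.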

Enhanced Pelvic CT Segmentation via Deep Learning: A Study on Loss Function Effects.

Ghaedi E, Asadi A, Hosseini SA, Arabi H

pubmed logopapersMay 29 2025
Effective radiotherapy planning requires precise delineation of organs at risk (OARs), but the traditional manual method is laborious and subject to variability. This study explores using convolutional neural networks (CNNs) for automating OAR segmentation in pelvic CT images, focusing on the bladder, prostate, rectum, and femoral heads (FHs) as an efficient alternative to manual segmentation. Utilizing the Medical Open Network for AI (MONAI) framework, we implemented and compared U-Net, ResU-Net, SegResNet, and Attention U-Net models and explored different loss functions to enhance segmentation accuracy. Our study involved 240 patients for prostate segmentation and 220 patients for the other organs. The models' performance was evaluated using metrics such as the Dice similarity coefficient (DSC), Jaccard index (JI), and the 95th percentile Hausdorff distance (95thHD), benchmarking the results against expert segmentation masks. SegResNet outperformed all models, achieving DSC values of 0.951 for the bladder, 0.829 for the prostate, 0.860 for the rectum, 0.979 for the left FH, and 0.985 for the right FH (p < 0.05 vs. U-Net and ResU-Net). Attention U-Net also excelled, particularly for bladder and rectum segmentation. Experiments with loss functions on SegResNet showed that Dice loss consistently delivered optimal or equivalent performance across OARs, while DiceCE slightly enhanced prostate segmentation (DSC = 0.845, p = 0.0138). These results indicate that advanced CNNs, especially SegResNet, paired with optimized loss functions, provide a reliable, efficient alternative to manual methods, promising improved precision in radiotherapy planning.
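The Dice loss compared in the loss-function experiments is one minus the soft Dice coefficient between predicted foreground probabilities and ground-truth labels. A single-class, pure-Python sketch (frameworks such as MONAI provide multi-class tensor versions):

```python
def dice_loss(probs, targets, eps=1e-6):
    """Soft Dice loss for one binary segmentation map, flattened to lists.
    probs: predicted foreground probabilities in [0, 1]; targets: 0/1
    labels.  `eps` avoids division by zero on empty masks."""
    inter = sum(p * t for p, t in zip(probs, targets))
    denom = sum(probs) + sum(targets)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)
```

Because it normalises by the total foreground mass, Dice loss stays informative for small structures such as the prostate, which cross-entropy alone tends to under-weight; DiceCE simply adds the two terms.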

Deep Learning-Based Fully Automated Aortic Valve Leaflets and Root Measurement From Computed Tomography Images - A Feasibility Study.

Yamauchi H, Aoyama G, Tsukihara H, Ino K, Tomii N, Takagi S, Fujimoto K, Sakaguchi T, Sakuma I, Ono M

pubmed logopapersMay 28 2025
The aim of this study was to retrain our existing deep learning-based fully automated aortic valve leaflets/root measurement algorithm using computed tomography (CT) data from patients with root dilatation (RD), and to assess its clinical feasibility. 67 ECG-gated cardiac CT scans were retrospectively collected from 40 patients with RD to retrain the algorithm. An additional 100 patients' CT data with aortic stenosis (AS, n=50) and aortic regurgitation (AR) with/without RD (n=50) were collected to evaluate the algorithm; 45 of the AR patients had RD. The algorithm provided patient-specific 3-dimensional aortic valve/root visualization. The measurements of the 100 cases automatically obtained by the algorithm were compared with an expert's manual measurements. Overall, there was a moderate-to-high correlation, with differences of 6.1-13.4 mm<sup>2</sup> for the virtual basal ring area, 1.1-2.6 mm for sinus diameter, 0.1-0.6 mm for coronary artery height, 0.2-0.5 mm for geometric height, and 0.9 mm for effective height, except for the sinotubular junction of the AR cases (10.3 mm), where the borderline over the dilated sinuses was indefinite, compared with 2.1 mm in AS cases. The measurement time per case by the algorithm (122 s) was significantly shorter than that of the experts (618-1,126 s). This fully automated algorithm can assist in evaluating aortic valve/root anatomy for planning surgical and transcatheter treatments while saving time and minimizing workload.
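Agreement between automated and manual measurements of this kind is commonly summarised by the mean difference and 95% limits of agreement (Bland-Altman analysis). A generic sketch of that summary, not the paper's own analysis:

```python
def bland_altman(auto, manual):
    """Mean difference and 95% limits of agreement between paired
    automated and manual measurements.  Returns (mean, lower, upper)."""
    diffs = [a - m for a, m in zip(auto, manual)]
    n = len(diffs)
    mean = sum(diffs) / n
    # sample standard deviation of the differences (n - 1 denominator)
    sd = (sum((d - mean) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return mean, mean - 1.96 * sd, mean + 1.96 * sd
```

For clinical use, the limits of agreement would be compared against the tolerance acceptable for valve sizing, not just the mean difference.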
