Structural alterations as a predictor of depression - a 7-Tesla MRI-based multidimensional approach.

Schnellbächer GJ, Rajkumar R, Veselinović T, Ramkiran S, Hagen J, Collee M, Shah NJ, Neuner I

PubMed · Jun 1 2025
Major depressive disorder (MDD) is a debilitating condition that is associated with changes in the default-mode network (DMN). Commonly reported features include alterations in gray matter volume (GMV), cortical thickness (CoT), and gyrification. A comprehensive examination of these variables using ultra-high field strength MRI and machine learning methods may lead to novel insights into the pathophysiology of depression and help develop a more personalized therapy. Cerebral images were obtained from 41 patients with confirmed MDD and 41 healthy controls, matched for age and gender, using a 7-T MRI. DMN parcellation followed the Schaefer 600 Atlas. Based on the results of a mixed-model repeated measures analysis, a support vector machine (SVM) calculation followed by leave-one-out cross-validation determined the predictive ability of structural features for the presence of MDD. A consecutive permutation procedure identified which areas contributed to the classification results. Correlating changes in those areas with BDI-II and AMDP scores added an explanatory aspect to this study. CoT did not delineate relevant changes in the mixed model and was excluded from further analysis. The SVM achieved a good prediction accuracy of 0.76 using gyrification data. GMV was not a viable predictor of disease presence; however, it correlated in the left parahippocampal gyrus with disease severity as measured by the BDI-II. Structural data of the DMN may therefore contain the necessary information to predict the presence of MDD. However, there may be inherent challenges with predicting disease course or treatment response due to high GMV variance and the static character of gyrification. Further improvements in data acquisition and analysis may help to overcome these difficulties.
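For context on the evaluation scheme described above, the following minimal sketch illustrates a linear support vector machine evaluated with leave-one-out cross-validation on a structural feature matrix (scikit-learn). The feature values, labels, and dimensions are synthetic placeholders, not the study data.

```python
# Minimal sketch: SVM with leave-one-out cross-validation on structural features.
# Feature matrix and labels are synthetic placeholders, not the study data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(82, 600))   # 82 subjects x 600 DMN parcel features (placeholders)
y = np.repeat([0, 1], 41)        # 41 healthy controls, 41 patients with MDD

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"LOOCV accuracy: {scores.mean():.2f}")
```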

Regions of interest in opportunistic computed tomography-based screening for osteoporosis: impact on short-term in vivo precision.

Park J, Kim Y, Hong S, Chee CG, Lee E, Lee JW

PubMed · Jun 1 2025
To determine an optimal region of interest (ROI) for opportunistic screening of osteoporosis in terms of short-term in vivo diagnostic precision. We included patients who underwent two CT scans and one dual-energy X-ray absorptiometry scan within a month in 2022. Deep-learning software automatically measured the attenuation in L1 using 54 ROIs (three slice thicknesses × six shapes × three intravertebral levels). To identify factors associated with a lower attenuation difference between the two CT scans, mixed-effect model analysis was performed with ROI-level (slice thickness, shape, intravertebral levels) and patient-level (age, sex, patient diameter, change in CT machine) factors. The root-mean-square standard deviation (RMSSD) and area under the receiver-operating-characteristic curve (AUROC) were calculated. In total, 73 consecutive patients (mean age ± standard deviation, 69 ± 9 years, 38 women) were included. A lower attenuation difference was observed in ROIs in images with slice thicknesses of 1 and 3 mm than in images with a slice thickness of 5 mm (p < .001), in large elliptical ROIs (p = .007 or < .001, respectively), and in mid- or cranial-level ROIs than in caudal-level ROIs (p < .001). No patient-level factors were significantly associated with the attenuation difference. Large, elliptical ROIs placed at the mid-level of L1 on images with 1- or 3-mm slice thicknesses yielded RMSSDs of 12.4-12.5 HU and AUROCs of 0.90. The largest possible regions of interest drawn in the mid-level trabecular portion of the L1 vertebra on thin-slice images may yield improvements in the precision of opportunistic screening for osteoporosis via CT.
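As a worked illustration of the precision metric used above, the sketch below computes the root-mean-square standard deviation (RMSSD) from paired attenuation measurements. The HU values are synthetic placeholders, and treating the two CT scans as duplicate measurements per patient is the assumption.

```python
# Minimal sketch of the root-mean-square standard deviation (RMSSD) used to
# quantify short-term in vivo precision from paired CT attenuation measurements.
# The paired HU values below are synthetic placeholders, not the study data.
import numpy as np

scan1 = np.array([142.0, 118.5, 160.2, 95.7])   # L1 trabecular attenuation, first CT (HU)
scan2 = np.array([139.4, 121.0, 158.8, 99.1])   # same ROI on the second CT (HU)

# For duplicate measurements, each patient's variance estimate is (x1 - x2)^2 / 2;
# RMSSD pools these variances across patients and takes the square root.
per_patient_var = (scan1 - scan2) ** 2 / 2.0
rmssd = np.sqrt(per_patient_var.mean())
print(f"RMSSD = {rmssd:.1f} HU")
```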

Impact of deep learning reconstruction on radiation dose reduction and cancer risk in CT examinations: a real-world clinical analysis.

Kobayashi N, Nakaura T, Yoshida N, Nagayama Y, Kidoh M, Uetani H, Sakabe D, Kawamata Y, Funama Y, Tsutsumi T, Hirai T

PubMed · Jun 1 2025
The purpose of this study is to estimate the extent to which the implementation of deep learning reconstruction (DLR) may reduce the risk of radiation-induced cancer from CT examinations, utilizing real-world clinical data. We retrospectively analyzed scan data of adult patients who underwent body CT during two periods relative to DLR implementation at our facility: a 12-month pre-DLR phase (n = 5553) using hybrid iterative reconstruction and a 12-month post-DLR phase (n = 5494) with routine CT reconstruction transitioning to DLR. To ensure comparability between the two groups, we employed 1:1 propensity score matching based on age, sex, and body mass index. Dose data were collected to estimate organ-specific equivalent doses and total effective doses. We assessed the average dose reduction post-DLR implementation and estimated the Lifetime Attributable Risk (LAR) for cancer per CT exam pre- and post-DLR implementation. The number of radiation-induced cancers before and after the implementation of DLR was also estimated. After propensity score matching, 5247 cases from each group were included in the final analysis. Post-DLR, the total effective body CT dose significantly decreased to 15.5 ± 10.3 mSv from 28.1 ± 14.0 mSv pre-DLR (p < 0.001), a 45% reduction. This dose reduction significantly lowered the radiation-induced cancer risk, especially among younger women, with the estimated annual cancer incidence decreasing from 0.247% pre-DLR to 0.130% post-DLR. The implementation of DLR can reduce radiation dose by 45% and the risk of radiation-induced cancer from 0.247% to 0.130% compared with hybrid iterative reconstruction. Question Can implementing deep learning reconstruction (DLR) in routine CT scans significantly reduce radiation dose and the risk of radiation-induced cancer compared to hybrid iterative reconstruction? Findings DLR reduced the total effective body CT dose by 45% (from 28.1 ± 14.0 mSv to 15.5 ± 10.3 mSv) and decreased estimated cancer incidence from 0.247% to 0.130%. Clinical relevance Adopting DLR in clinical practice substantially lowers radiation exposure and cancer risk from CT exams, enhances patient safety, especially for younger women, and underscores the importance of advanced imaging techniques.
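The sketch below illustrates 1:1 nearest-neighbour propensity score matching on age, sex, and BMI, similar in spirit to the matching step described above (scikit-learn and pandas). The cohort is synthetic, and greedy matching with replacement is a simplification, not the study's exact procedure.

```python
# Minimal sketch of 1:1 propensity score matching on age, sex, and BMI.
# Synthetic cohort; not the study data or exact matching algorithm.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "age": rng.normal(60, 12, n),
    "sex": rng.integers(0, 2, n),
    "bmi": rng.normal(24, 4, n),
    "post_dlr": rng.integers(0, 2, n),   # 1 = post-DLR phase, 0 = pre-DLR phase
})

# Propensity score: probability of belonging to the post-DLR phase given covariates.
covariates = df[["age", "sex", "bmi"]]
ps = LogisticRegression().fit(covariates, df["post_dlr"]).predict_proba(covariates)[:, 1]

treated = df.index[df["post_dlr"] == 1].to_numpy()
control = df.index[df["post_dlr"] == 0].to_numpy()

# Greedy 1:1 nearest-neighbour match on the propensity score (with replacement).
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_pairs = list(zip(treated, control[idx.ravel()]))
print(f"{len(matched_pairs)} matched treated/control pairs")
```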

Prediction of therapeutic response to transarterial chemoembolization plus systemic therapy regimen in hepatocellular carcinoma using pretreatment contrast-enhanced MRI based habitat analysis and Crossformer model.

Zhu Y, Liu T, Chen J, Wen L, Zhang J, Zheng D

PubMed · Jun 1 2025
To develop habitat and deep learning (DL) models from multi-phase contrast-enhanced magnetic resonance imaging (CE-MRI) habitat images categorized using the K-means clustering algorithm. Additionally, we aim to assess the predictive value of identified regions for early evaluation of the responsiveness of hepatocellular carcinoma (HCC) patients to treatment with transarterial chemoembolization (TACE) plus molecular targeted therapies (MTT) and anti-PD-(L)1. A total of 102 patients with HCC from two institutions (A, n = 63 and B, n = 39) who received TACE plus systemic therapy were enrolled from September 2020 to January 2024. Multiple CE-MRI sequences were used to outline 3D volumes of interest (VOI) of the lesion. Subsequently, K-means clustering was applied to categorize intratumoral voxels into three distinct subgroups, based on signal intensity values of images. Using data from institution A, the habitat model was built with the ExtraTrees classifier after extracting radiomics features from intratumoral habitats. Similarly, the Crossformer model and ResNet50 model were trained on multi-channel data in institution A, and a DL model with Transformer-based aggregation was constructed to predict the response. Finally, all models underwent validation at institution B. The Crossformer model and the habitat model both showed high areas under the receiver operating characteristic curve (AUCs) of 0.869 and 0.877 in the training cohort. In validation, AUC was 0.762 for the Crossformer model and 0.721 for the habitat model. The habitat model and DL model based on CE-MRI possess the capability to non-invasively predict the efficacy of TACE plus systemic therapy in HCC patients, which is critical for precision treatment and patient outcomes.
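To illustrate the habitat step described above, the sketch below applies voxel-wise K-means clustering (k = 3) to multi-phase signal intensities inside a tumour VOI (scikit-learn). The arrays stand in for co-registered CE-MRI phases and are random placeholders, not the study images.

```python
# Minimal sketch: voxel-wise K-means clustering to split a tumour VOI into three
# intensity "habitats" across multi-phase CE-MRI. Random placeholder volumes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
phase_a = rng.normal(size=(40, 40, 40))       # e.g., arterial phase (placeholder)
phase_b = rng.normal(size=(40, 40, 40))       # e.g., portal-venous phase (placeholder)
mask = rng.random((40, 40, 40)) > 0.7         # tumour VOI mask (placeholder)

# Stack per-voxel signal intensities from each phase as the clustering features.
features = np.column_stack([phase_a[mask], phase_b[mask]])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

habitat_map = np.zeros(mask.shape, dtype=int)
habitat_map[mask] = labels + 1                # habitats 1-3 inside the VOI, 0 outside
print(np.bincount(habitat_map[mask]))         # voxel count per habitat
```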

Deep learning-enhanced zero echo time MRI for glenohumeral assessment in shoulder instability: a comparative study with CT.

Carretero-Gómez L, Fung M, Wiesinger F, Carl M, McKinnon G, de Arcos J, Mandava S, Arauz S, Sánchez-Lacalle E, Nagrani S, López-Alcorocho JM, Rodríguez-Íñigo E, Malpica N, Padrón M

PubMed · Jun 1 2025
To evaluate image quality and lesion conspicuity of zero echo time (ZTE) MRI reconstructed with deep learning (DL)-based algorithm versus conventional reconstruction and to assess DL ZTE performance against CT for bone loss measurements in shoulder instability. Forty-four patients (9 females; 33.5 ± 15.65 years) with symptomatic anterior glenohumeral instability and no previous shoulder surgery underwent ZTE MRI and CT on the same day. ZTE images were reconstructed with conventional and DL methods and post-processed for CT-like contrast. Two musculoskeletal radiologists, blinded to the reconstruction method, independently evaluated 20 randomized MR ZTE datasets with and without DL-enhancement for perceived signal-to-noise ratio, resolution, and lesion conspicuity at humerus and glenoid using a 4-point Likert scale. Inter-reader reliability was assessed using weighted Cohen's kappa (K). An ordinal logistic regression model analyzed Likert scores, with the reconstruction method (DL-enhanced vs. conventional) as the predictor. Glenoid track (GT) and Hill-Sachs interval (HSI) measurements were performed by another radiologist on both DL ZTE and CT datasets. Intermodal agreement was assessed through intraclass correlation coefficients (ICCs) and Bland-Altman analysis. DL ZTE MR bone images scored higher than conventional ZTE across all items, with significantly improved perceived resolution (odds ratio (OR) = 7.67, p = 0.01) and glenoid lesion conspicuity (OR = 25.12, p = 0.01), with substantial inter-rater agreement (K = 0.61 (0.38-0.83) to 0.77 (0.58-0.95)). Inter-modality assessment showed almost perfect agreement between DL ZTE MR and CT for all bone measurements (overall ICC = 0.99 (0.97-0.99)), with mean differences of 0.08 (- 0.80 to 0.96) mm for GT and - 0.07 (- 1.24 to 1.10) mm for HSI. DL-based reconstruction enhances ZTE MRI quality for glenohumeral assessment, offering osseous evaluation and quantification equivalent to gold-standard CT, potentially simplifying preoperative workflow, and reducing CT radiation exposure.
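As a brief illustration of the inter-modality agreement analysis mentioned above, the sketch below computes the Bland-Altman bias and 95% limits of agreement for paired measurements from two methods. The millimetre values are synthetic placeholders, not the study's glenoid track or Hill-Sachs interval measurements.

```python
# Minimal sketch of a Bland-Altman agreement analysis between two measurement
# methods (e.g., DL ZTE MRI vs. CT). Paired values are synthetic placeholders.
import numpy as np

zte = np.array([22.1, 25.4, 19.8, 24.0, 21.5])   # method A measurements (mm)
ct  = np.array([22.0, 25.9, 19.5, 23.6, 21.9])   # method B measurements (mm)

diff = zte - ct
bias = diff.mean()                                # mean difference between methods
loa = 1.96 * diff.std(ddof=1)                     # half-width of 95% limits of agreement
print(f"bias = {bias:.2f} mm, limits of agreement = {bias - loa:.2f} to {bias + loa:.2f} mm")
```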

Treatment Response Assessment According to Updated PROMISE Criteria in Patients with Metastatic Prostate Cancer Using an Automated Imaging Platform for Identification, Measurement, and Temporal Tracking of Disease.

Benitez CM, Sahlstedt H, Sonni I, Brynolfsson J, Berenji GR, Juarez JE, Kane N, Tsai S, Rettig M, Nickols NG, Duriseti S

PubMed · Jun 1 2025
Prostate-specific membrane antigen (PSMA) molecular imaging is widely used for disease assessment in prostate cancer (PC). Artificial intelligence (AI) platforms such as automated Prostate Cancer Molecular Imaging Standardized Evaluation (aPROMISE) identify and quantify locoregional and distant disease, thereby expediting lesion identification and standardizing reporting. Our aim was to evaluate the ability of the updated aPROMISE platform to assess treatment responses based on integration of the RECIP (Response Evaluation Criteria in PSMA positron emission tomography-computed tomography [PET/CT]) 1.0 classification. The study included 33 patients with castration-sensitive PC (CSPC) and 34 with castration-resistant PC (CRPC) who underwent PSMA-targeted molecular imaging before and ≥2 mo after completion of treatment. Tracer-avid lesions were identified using aPROMISE for pretreatment and post-treatment PET/CT scans. Detected lesions were manually approved by an experienced nuclear medicine physician, and total tumor volume (TTV) was calculated. Response was assessed according to RECIP 1.0 as CR (complete response), PR (partial response), PD (progressive disease), or SD (stable disease). aPROMISE identified 1576 lesions on baseline scans and 1631 lesions on follow-up imaging, 618 (35%) of which were new. Of the 67 patients, aPROMISE classified four as CR, 16 as PR, 34 as SD, and 13 as PD; five cases were misclassified. The agreement between aPROMISE and clinician validation was 89.6% (κ = 0.79). aPROMISE may serve as a novel assessment tool for treatment response that integrates PSMA PET/CT results and RECIP imaging criteria. The precision and accuracy of this automated process should be validated in prospective clinical studies. We used an artificial intelligence (AI) tool to analyze scans for prostate cancer before and after treatment to see if we could track how cancer spots respond to treatment. We found that the AI approach was successful in tracking individual tumor changes, showing which tumors disappeared, and identifying new tumors in response to prostate cancer treatment.
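For readers unfamiliar with RECIP 1.0, the sketch below encodes the response logic as commonly summarized: complete response when no PSMA-positive volume remains, partial response for a ≥30% decline in total tumour volume without new lesions, progressive disease for a ≥20% increase with new lesions, and stable disease otherwise. The thresholds reflect the published criteria as generally cited, not values stated in the study text above.

```python
# Minimal sketch of RECIP 1.0 response classification from PSMA-positive
# total tumour volume (TTV) and the presence of new lesions.
# Thresholds are as commonly summarized in the literature (assumption).
def recip_response(ttv_baseline: float, ttv_followup: float, new_lesions: bool) -> str:
    if ttv_followup == 0 and not new_lesions:
        return "CR"                              # complete response: no residual PSMA-positive volume
    change = (ttv_followup - ttv_baseline) / ttv_baseline
    if change <= -0.30 and not new_lesions:
        return "PR"                              # partial response
    if change >= 0.20 and new_lesions:
        return "PD"                              # progressive disease
    return "SD"                                  # stable disease

print(recip_response(ttv_baseline=120.0, ttv_followup=70.0, new_lesions=False))  # PR
```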

Applying Deep-Learning Algorithm Interpreting Kidney, Ureter, and Bladder (KUB) X-Rays to Detect Colon Cancer.

Lee L, Lin C, Hsu CJ, Lin HH, Lin TC, Liu YH, Hu JM

PubMed · Jun 1 2025
Early screening is crucial in reducing the mortality of colorectal cancer (CRC). Current screening methods, including fecal occult blood tests (FOBT) and colonoscopy, are primarily limited by low patient compliance and the invasive nature of the procedures. Several advanced imaging techniques such as computed tomography (CT) and histological imaging have been integrated with artificial intelligence (AI) to enhance the detection of CRC. However, limitations remain because of the challenges and cost associated with image acquisition. The kidney, ureter, and bladder (KUB) radiograph, which is inexpensive and widely used for abdominal assessment in emergency settings, shows potential for detecting CRC when enhanced using advanced techniques. This study aimed to develop a deep learning model (DLM) to detect CRC using KUB radiographs. This retrospective study was conducted using data from the Tri-Service General Hospital (TSGH) between January 2011 and December 2020, including patients with at least one KUB radiograph. Patients were divided into development (n = 28,055), tuning (n = 11,234), and internal validation (n = 16,875) sets. An additional 15,876 patients were collected from a community hospital as the external validation set. A 121-layer DenseNet convolutional network was trained to classify KUB images for CRC detection. The model performance was evaluated using receiver operating characteristic curves, with sensitivity, specificity, and area under the curve (AUC) as metrics. The AUC, sensitivity, and specificity of the DLM were 0.738, 61.3%, and 74.4% in the internal validation set, and 0.656, 47.7%, and 72.9% in the external validation set. The model performed better for high-grade CRC, with AUCs of 0.744 and 0.674 in the internal and external sets, respectively. Stratified analysis showed superior performance in females aged 55-64 with high-grade cancers. AI-positive predictions were associated with a higher long-term risk of all-cause mortality in both validation cohorts. AI-enhanced KUB X-ray analysis can improve CRC screening coverage and effectiveness, providing a cost-effective alternative to traditional methods. Further prospective studies are necessary to validate these findings and fully integrate this technology into clinical practice.
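The sketch below shows how a 121-layer DenseNet can be adapted for binary classification of single-channel KUB radiographs (PyTorch/torchvision). The grayscale input stem, image size, and two-class head are illustrative assumptions; pretrained weights, preprocessing, and training are omitted.

```python
# Minimal sketch: DenseNet-121 adapted for binary classification of grayscale
# KUB radiographs. The input below is a random placeholder, not real data.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=None)
model.features.conv0 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # single-channel input
model.classifier = nn.Linear(model.classifier.in_features, 2)                            # CRC vs. no CRC

x = torch.randn(1, 1, 512, 512)       # one placeholder KUB radiograph
logits = model(x)
print(logits.shape)                   # torch.Size([1, 2])
```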

Utilizing Pseudo Color Image to Improve the Performance of Deep Transfer Learning-Based Computer-Aided Diagnosis Schemes in Breast Mass Classification.

Jones MA, Zhang K, Faiz R, Islam W, Jo J, Zheng B, Qiu Y

PubMed · Jun 1 2025
The purpose of this study is to investigate the impact of using morphological information in classifying suspicious breast lesions. The widespread use of deep transfer learning can significantly improve the performance of mammogram-based CADx schemes. However, digital mammograms are grayscale images, while deep learning models are typically optimized using natural images containing three channels. Thus, grayscale mammograms need to be converted into three-channel images before they can serve as input to deep transfer models. This study aims to develop a novel pseudo color image generation method that utilizes the mass contour information to enhance the classification performance. Accordingly, a total of 830 breast cancer cases were retrospectively collected, comprising 310 benign and 520 malignant cases. For each case, a total of four regions of interest (ROI) were collected from the grayscale images captured for both the CC and MLO views of the two breasts. Meanwhile, a total of seven pseudo color image sets were generated as the input of the deep learning models, created through a combination of the original grayscale image, a histogram equalized image, a bilaterally filtered image, and a segmented mass. Accordingly, the output features from four identical pre-trained deep learning models were concatenated and then processed by a support vector machine-based classifier to generate the final benign/malignant labels. The performance of each image set was evaluated and compared. The results demonstrate that the pseudo color sets containing the manually segmented mass performed significantly better than all other pseudo color sets, achieving an AUC (area under the ROC curve) of up to 0.889 ± 0.012 and an overall accuracy of up to 0.816 ± 0.020. At the same time, the performance improvement is also dependent on the accuracy of the mass segmentation. The results of this study support our hypothesis that adding accurately segmented mass contours can provide complementary information, thereby enhancing the performance of the deep transfer model in classifying suspicious breast lesions.
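As an illustration of the pseudo color construction, the sketch below stacks an original grayscale ROI, a histogram-equalized copy, and a bilaterally filtered copy into the three channels expected by an ImageNet-pretrained backbone (OpenCV). It omits the segmented-mass channel that performed best in the study, and the ROI here is a random placeholder image.

```python
# Minimal sketch: build a three-channel pseudo color image from a grayscale ROI
# using histogram equalization and bilateral filtering. Placeholder ROI only.
import cv2
import numpy as np

roi = (np.random.default_rng(0).random((224, 224)) * 255).astype(np.uint8)  # placeholder mammogram ROI

equalized = cv2.equalizeHist(roi)                                   # contrast-enhanced channel
smoothed = cv2.bilateralFilter(roi, d=9, sigmaColor=75, sigmaSpace=75)  # edge-preserving smoothed channel

pseudo_color = cv2.merge([roi, equalized, smoothed])   # H x W x 3 input for a transfer model
print(pseudo_color.shape)                              # (224, 224, 3)
```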

An Adaptive SCG-ECG Multimodal Gating Framework for Cardiac CTA.

Ganesh S, Abozeed M, Aziz U, Tridandapani S, Bhatti PT

PubMed · Jun 1 2025
Cardiovascular disease (CVD) is the leading cause of death worldwide. Coronary artery disease (CAD), a prevalent form of CVD, is typically assessed using catheter coronary angiography (CCA), an invasive, costly procedure with associated risks. While cardiac computed tomography angiography (CTA) presents a less invasive alternative, it suffers from limited temporal resolution, often resulting in motion artifacts that degrade diagnostic quality. Traditional ECG-based gating methods for CTA inadequately capture cardiac mechanical motion. To address this, we propose a novel multimodal approach that enhances CTA imaging by predicting cardiac quiescent periods using seismocardiogram (SCG) and ECG data, integrated through a weighted fusion (WF) approach and artificial neural networks (ANNs). We developed a regression-based ANN framework (r-ANN WF) designed to improve prediction accuracy and reduce computational complexity, which was compared with a classification-based framework (c-ANN WF), ECG gating, and US data. Our results demonstrate that the r-ANN WF approach improved overall diastolic and systolic cardiac quiescence prediction accuracy by 52.6% compared to ECG-based predictions, using ultrasound (US) as the ground truth, with an average prediction time of 4.83 ms. Comparative evaluations based on reconstructed CTA images show that both r-ANN WF and c-ANN WF offer diagnostic quality comparable to US-based gating, underscoring their clinical potential. Additionally, the lower computational complexity of r-ANN WF makes it suitable for real-time applications. This approach could enhance CTA's diagnostic quality, offering a more accurate and efficient method for CVD diagnosis and management.
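To make the regression framework above concrete, the sketch below defines a small fully connected network that maps fused ECG/SCG features to a single quiescence timing value (PyTorch). The layer sizes, feature dimension, and output interpretation are illustrative assumptions, not the published r-ANN WF configuration.

```python
# Minimal sketch of a regression network for cardiac quiescence prediction from
# fused ECG/SCG features. Architecture and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class QuiescenceRegressor(nn.Module):
    def __init__(self, n_features: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),            # predicted quiescent-phase timing value
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = QuiescenceRegressor()
features = torch.randn(8, 32)             # 8 beats x 32 fused ECG/SCG features (placeholder)
print(model(features).shape)              # torch.Size([8, 1])
```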

Deep Learning-Based Estimation of Radiographic Position to Automatically Set Up the X-Ray Prime Factors.

Del Cerro CF, Giménez RC, García-Blas J, Sosenko K, Ortega JM, Desco M, Abella M

PubMed · Jun 1 2025
Radiation dose and image quality in radiology are influenced by the X-ray prime factors: kVp, mAs, and source-detector distance. These parameters are set by the X-ray technician prior to the acquisition, based on the radiographic position. An incorrect setting of these parameters may result in exposure errors, forcing the examination to be repeated and increasing the radiation dose delivered to the patient. This work presents a novel approach based on deep learning that automatically estimates the radiographic position from a photograph captured prior to X-ray exposure, which can then be used to select the optimal prime factors. We created a database using 66 radiographic positions commonly used in clinical settings, prospectively obtained during 2022 from 75 volunteers in two different X-ray facilities. The architecture for radiographic position classification was a lightweight version of ConvNeXt trained with fine-tuning, discriminative learning rates, and a one-cycle policy scheduler. Our resulting model achieved an accuracy of 93.17% for radiographic position classification, which increased to 95.58% when considering correct selection of the prime factors, since half of the errors involved positions with the same kVp and mAs values. Most errors occurred for radiographic positions with similar patient pose in the photograph. The results suggest that the method could facilitate the acquisition workflow, reducing the occurrence of exposure errors while preventing unnecessary radiation dose to patients.
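The sketch below outlines fine-tuning a compact ConvNeXt with discriminative learning rates and a one-cycle schedule (PyTorch/torchvision), in the spirit of the training recipe described above. The specific backbone variant, layer grouping, learning rates, and schedule length are illustrative assumptions.

```python
# Minimal sketch: ConvNeXt fine-tuning with discriminative learning rates and a
# one-cycle scheduler. Hyperparameters and backbone choice are assumptions.
import torch
from torch import nn, optim
from torchvision import models

# Compact ConvNeXt backbone; load pretrained weights in practice before fine-tuning.
model = models.convnext_tiny(weights=None)
model.classifier[2] = nn.Linear(model.classifier[2].in_features, 66)  # 66 radiographic positions

# Discriminative learning rates: small rate for the backbone, larger for the new head.
optimizer = optim.AdamW([
    {"params": model.features.parameters(), "lr": 1e-5},
    {"params": model.classifier.parameters(), "lr": 1e-3},
])

# One-cycle policy over an assumed number of steps and epochs.
steps_per_epoch, epochs = 100, 10
scheduler = optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=[1e-4, 1e-2], steps_per_epoch=steps_per_epoch, epochs=epochs)

x = torch.randn(2, 3, 224, 224)       # placeholder patient set-up photographs
print(model(x).shape)                  # torch.Size([2, 66])
```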