
Machine Learning Methods Based on Chest CT for Predicting the Risk of COVID-19-Associated Pulmonary Aspergillosis.

Liu J, Zhang J, Wang H, Fang C, Wei L, Chen J, Li M, Wu S, Zeng Q

PubMed · Jun 1 2025
To develop and validate a machine learning model based on chest CT and clinical risk factors to predict secondary Aspergillus infection in hospitalized COVID-19 patients. This retrospective study included 291 COVID-19 patients with complete clinical data between December 2022 and March 2024, of whom 82 developed secondary Aspergillus infection after admission. Patients were divided into training (n=162), internal validation (n=69), and external validation (n=60) cohorts. Least absolute shrinkage and selection operator (LASSO) regression was applied to select the most significant image features extracted from chest CT. Univariate and multivariate logistic regression analyses were performed to develop a multifactorial model, integrating chest CT with clinical risk factors, to predict secondary Aspergillus infection in hospitalized COVID-19 patients. The performance of the constructed models was assessed with the receiver operating characteristic curve and the area under the curve (AUC). The clinical application value of the models was evaluated using decision curve analysis (DCA). Eleven radiomics features and seven clinical risk factors were selected to develop the prediction models. The multifactorial model demonstrated favorable predictive performance, with the highest AUC values of 0.98 (95% CI, 0.96-1.00) in the training cohort, 0.98 (95% CI, 0.96-1.00) in the internal validation cohort, and 0.87 (95% CI, 0.75-0.99) in the external validation cohort, significantly superior to the models that relied solely on chest CT or clinical risk factors. Hosmer-Lemeshow tests of the calibration curves showed no significant miscalibration in the training cohort (p=0.359) or internal validation cohort (p=0.941), indicating good calibration of the multifactorial model. DCA indicated that the multifactorial model provided greater net benefit than the other models. The multifactorial model can serve as a reliable tool for predicting the risk of COVID-19-associated pulmonary aspergillosis.
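As a rough illustration of the pipeline described above, the sketch below applies LASSO-based feature selection followed by logistic regression to a synthetic stand-in for the radiomics feature matrix; all variable names and data are hypothetical, not from the study.

```python
# A minimal sketch of LASSO feature selection + logistic regression, assuming a
# precomputed radiomics feature matrix X and binary infection labels y
# (synthetic stand-in data; names are hypothetical, not from the paper).
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(162, 100))        # training cohort: 162 patients, 100 radiomics features
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=162) > 0).astype(int)  # 1 = infection

X_std = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5).fit(X_std, y)    # LASSO shrinks uninformative coefficients to zero
selected = np.flatnonzero(lasso.coef_) # indices of retained radiomics features

clf = LogisticRegression(max_iter=1000).fit(X_std[:, selected], y)
print("training AUC:", roc_auc_score(y, clf.predict_proba(X_std[:, selected])[:, 1]))
```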

GDP-Net: Global Dependency-Enhanced Dual-Domain Parallel Network for Ring Artifact Removal.

Zhang Y, Liu G, Liu Y, Xie S, Gu J, Huang Z, Ji X, Lyu T, Xi Y, Zhu S, Yang J, Chen Y

PubMed · Jun 1 2025
In computed tomography (CT) imaging, ring artifacts caused by inconsistent detector response can significantly degrade reconstructed images, negatively affecting subsequent applications. The new generation of CT systems based on photon-counting detectors is affected by ring artifacts even more severely. The flexibility and variety of detector responses make it difficult to build a well-defined model that characterizes the ring artifacts. In this context, this study proposes a global dependency-enhanced dual-domain parallel neural network for ring artifact removal (RAR). First, because the features of ring artifacts differ between Cartesian and polar coordinates, a parallel architecture is adopted so that the network can extract and exploit latent features from both domains to improve ring artifact removal. Moreover, ring artifacts are globally correlated in both Cartesian and polar coordinate systems, but convolutional neural networks have inherent shortcomings in modeling long-range dependencies. To tackle this problem, this study introduces the novel Mamba mechanism to achieve a global receptive field without incurring high computational complexity, enabling effective capture of long-range dependencies and thereby enhancing model performance in image restoration and artifact reduction. Experiments on simulated data validate the effectiveness of the dual-domain parallel neural network and the Mamba mechanism, and results on two unseen real datasets demonstrate the promising performance of the proposed RAR algorithm in eliminating ring artifacts and recovering image details.
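For intuition about the dual-domain design, the hedged sketch below resamples an image from Cartesian to polar coordinates with SciPy: a concentric ring artifact becomes a single horizontal band in the polar view, which is the property the parallel branches exploit. The transform and the synthetic ring are illustrative only, not the paper's implementation.

```python
# Cartesian-to-polar resampling: rows index radius, columns index angle, so a
# ring at fixed radius maps to one horizontal band (illustrative sketch only).
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(img, n_r=256, n_theta=256):
    cy, cx = (np.asarray(img.shape) - 1) / 2.0
    r = np.linspace(0, min(cy, cx), n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    coords = np.stack([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
    return map_coordinates(img, coords, order=1)

img = np.zeros((512, 512))
yy, xx = np.ogrid[:512, :512]
img[np.abs(np.hypot(yy - 255.5, xx - 255.5) - 100) < 1.5] = 1.0  # synthetic "ring"
polar = to_polar(img)  # the ring now appears as a single horizontal band
```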

Evolution of Cortical Lesions and Function-Specific Cognitive Decline in People With Multiple Sclerosis.

Krijnen EA, Jelgerhuis J, Van Dam M, Bouman PM, Barkhof F, Klawiter EC, Hulst HE, Strijbis EMM, Schoonheim MM

PubMed · Jun 1 2025
Cortical lesions in multiple sclerosis (MS) severely affect cognition, but their longitudinal evolution and impact on specific cognitive functions remain understudied. This study investigates the evolution of function-specific cognitive functioning over 10 years in people with MS and assesses the influence of cortical lesion load and formation on these trajectories. In this prospectively collected study, people with MS underwent 3T MRI (T1 and fluid-attenuated inversion recovery) at 3 study visits between 2008 and 2022. Cognitive functioning was evaluated with neuropsychological assessments reflecting 7 cognitive functions: attention; executive functioning (EF); information processing speed (IPS); verbal fluency; and verbal, visuospatial, and working memory. Cortical lesions were manually identified on artificial intelligence-generated double-inversion recovery images. Linear mixed models were constructed to assess the temporal relationship between cortical lesion load and function-specific cognitive decline. In addition, analyses were stratified by MS disease stage: early and late relapsing-remitting MS (cutoff disease duration of 15 years) and progressive MS. The study included 223 people with MS (mean age, 47.8 ± 11.1 years; 153 women) and 62 healthy controls. All completed 5-year follow-up, and 37 healthy controls and 94 people with MS completed 10-year follow-up. At baseline, people with MS exhibited worse IPS and working memory. Over 10 years, cognitive decline was most severe in attention, verbal memory, and EF. At baseline, people with MS had a median cortical lesion count of 7 (range 0-73), which was related to subsequent decline in attention (B [95% CI] = -0.22 [-0.40 to -0.03]) and verbal fluency (B [95% CI] = -0.23 [-0.37 to -0.09]). Over time, cortical lesions increased by a median count of 4 (range -2 to 71), particularly in late and progressive disease, an increase that was related to decline in verbal fluency (B [95% CI] = -0.33 [-0.51 to -0.15]). The associations between (change in) cortical lesion load and cognitive decline were not modified by MS disease stage. Cognition worsened over 10 years, particularly in attention, verbal memory, and EF, while preexisting impairments were worst in other functions such as IPS. Worse baseline cognitive functioning was related to baseline cortical lesions, whereas baseline cortical lesions and cortical lesion formation were related to subsequent decline in functions less affected at baseline. Accumulating cortical damage leads to spreading of cognitive impairment toward additional functions.
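A minimal sketch of the linear-mixed-model analysis named above, assuming a long-format table with one row per participant per visit; the column names, effect sizes, and synthetic data are hypothetical, and the lesion-load-by-time interaction is the term of interest.

```python
# Random intercepts per participant capture the repeated-measures structure;
# the years:cortical_lesions interaction tests whether higher lesion load
# predicts steeper cognitive decline (all data synthetic and illustrative).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, visits = 50, 3
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), visits),
    "years": np.tile([0, 5, 10], n),
    "cortical_lesions": np.repeat(rng.poisson(7, n), visits),
})
df["verbal_fluency_z"] = (-0.02 * df.years
                          - 0.01 * df.cortical_lesions * df.years
                          + rng.normal(0, 0.3, len(df)))

model = smf.mixedlm("verbal_fluency_z ~ years * cortical_lesions",
                    df, groups=df["subject"]).fit()
print(model.summary())
```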

Information Geometric Approaches for Patient-Specific Test-Time Adaptation of Deep Learning Models for Semantic Segmentation.

Ravishankar H, Paluru N, Sudhakar P, Yalavarthy PK

PubMed · Jun 1 2025
The test-time adaptation (TTA) of deep-learning-based semantic segmentation models, specific to individual patient data, was addressed in this study. Existing TTA methods in medical imaging are often unconstrained or require anatomical prior information or additional neural networks built during the training phase, making them less practical and prone to performance deterioration. In this study, a novel framework based on information geometric principles was proposed to achieve generic, off-the-shelf, regularized patient-specific adaptation of models at test time. By considering the pre-trained model and the adapted models as part of statistical neuromanifolds, test-time adaptation was treated as constrained functional regularization using information geometric measures, leading to improved generalization and patient optimality. The efficacy of the proposed approach was shown on three challenging problems: 1) improving generalization of state-of-the-art models for segmenting COVID-19 anomalies in computed tomography (CT) images, 2) cross-institutional brain tumor segmentation from magnetic resonance (MR) images, and 3) segmentation of retinal layers in optical coherence tomography (OCT) images. Further, it was demonstrated that robust patient-specific adaptation can be achieved without adding significant computational burden, making this the first such approach based on information geometric principles.
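The paper's exact information-geometric measures are not given here, so the sketch below shows a generic KL-regularized test-time adaptation loop in PyTorch in the same spirit: entropy minimization on the patient's images, with a divergence term that keeps the adapted model close to the frozen pre-trained one. Losses and names are illustrative, not the published method.

```python
# A hedged sketch of regularized TTA: adapt toward confident predictions while
# a KL term anchors the model to its pre-trained state (illustrative only).
import copy
import torch
import torch.nn.functional as F

def adapt_on_patient(model, images, steps=10, lam=1.0, lr=1e-4):
    frozen = copy.deepcopy(model).eval()
    for p in frozen.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        logits = model(images)                      # (B, C, H, W) segmentation logits
        probs = logits.softmax(dim=1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
        with torch.no_grad():
            ref = frozen(images).softmax(dim=1)
        kl = F.kl_div(probs.clamp_min(1e-8).log(), ref, reduction="batchmean")
        loss = entropy + lam * kl                   # regularized functional objective
        opt.zero_grad(); loss.backward(); opt.step()
    return model
```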

A new method for placental volume measurements using tracked 2D ultrasound and automatic image segmentation.

Sagberg K, Lie T, F Peterson H, Hillestad V, Eskild A, Bø LE

PubMed · Jun 1 2025
Placental volume measurements can potentially identify high-risk pregnancies. We aimed to develop and validate a new method for placental volume measurements using tracked 2D ultrasound and automatic image segmentation. We included 43 pregnancies at gestational week 27, acquired placental images using a 2D ultrasound probe with position tracking, and trained a convolutional neural network (CNN) for automatic image segmentation. The automatically segmented 2D images were combined with tracking data to calculate placental volume. For 15 of the included pregnancies, placental volume was also estimated from MRI examinations, 3D ultrasound, and manually segmented 2D ultrasound images. The ultrasound methods were compared against MRI (the gold standard). The CNN demonstrated good performance in automatic image segmentation (F1-score 0.84). The correlation with MRI-based placental volume was similar for tracked 2D ultrasound using automatically segmented images (absolute-agreement intraclass correlation coefficient [ICC] 0.58, 95% CI 0.13-0.84) and manually segmented images (ICC 0.59, 95% CI 0.13-0.84). The 3D ultrasound method showed a lower ICC (0.35, 95% CI -0.11 to 0.74) than the methods based on tracked 2D ultrasound. Tracked 2D ultrasound with automatic image segmentation is a promising new method for placental volume measurement with potential for further improvement.
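As a simplified illustration of how tracked, segmented 2D slices can yield a volume, the sketch below sums in-plane mask areas weighted by the spacing between tracked slice positions; real freehand reconstruction handles irregular sweeps more carefully, and all names and data here are hypothetical.

```python
# A hedged sketch: each binary mask contributes its in-plane area times the
# spacing to the next tracked slice (a simplification of freehand 3D US).
import numpy as np

def volume_from_tracked_slices(masks, positions, pixel_area_mm2):
    """masks: (N, H, W) binary segmentations; positions: (N, 3) probe positions in mm."""
    order = np.argsort(positions[:, 2])             # assume a sweep along one axis
    masks, positions = masks[order], positions[order]
    spacing = np.diff(positions[:, 2], append=positions[-1, 2])
    spacing[-1] = spacing[-2] if len(spacing) > 1 else 1.0
    areas_mm2 = masks.sum(axis=(1, 2)) * pixel_area_mm2
    return float((areas_mm2 * spacing).sum()) / 1000.0  # mm^3 -> cm^3

masks = np.random.default_rng(2).random((30, 128, 128)) > 0.7
positions = np.cumsum(np.full((30, 3), [0.0, 0.0, 2.0]), axis=0)  # 2 mm steps
print(volume_from_tracked_slices(masks, positions, pixel_area_mm2=0.25), "cm^3")
```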

Cardiac Phase Estimation Using Deep Learning Analysis of Pulsed-Mode Projections: Toward Autonomous Cardiac CT Imaging.

Wu P, Haneda E, Pack JD, Heukensfeldt Jansen I, Hsiao A, McVeigh E, De Man B

PubMed · Jun 1 2025
Cardiac CT plays an important role in diagnosing heart disease but is conventionally limited by a complex workflow that requires dedicated phase and bolus tracking [e.g., electrocardiogram (ECG) gating]. This work reports first progress toward robust and autonomous cardiac CT exams through joint deep learning (DL) and analytical analysis of pulsed-mode projections (PMPs). To this end, cardiac phase and its uncertainty were simultaneously estimated using a novel projection-domain cardiac phase estimation network (PhaseNet), which utilizes a sliding-window multi-channel feature extraction strategy and a long short-term memory (LSTM) block to extract temporal correlations between time-distributed PMPs. An uncertainty-driven Viterbi (UDV) regularizer was developed to refine the DL estimates at each time point through dynamic programming, applying stronger regularization at time points where the DL estimates have higher uncertainty. The performance of the proposed phase estimation pipeline was evaluated using accurate physics-based emulated data. PhaseNet achieved improved phase estimation accuracy compared with competing methods in terms of RMSE (~50% improvement vs. a standard CNN-LSTM; ~24% improvement vs. a multi-channel residual network). The added UDV regularizer yielded a further ~14% improvement in RMSE, achieving accurate phase estimation with <6% RMSE in cardiac phase (phase ranging from 0 to 100%). To our knowledge, this is the first publication on prospective cardiac phase estimation in the projection domain. Combined with our previous work on PMP-based bolus curve estimation, the proposed method could potentially enable autonomous cardiac scanning without an ECG device or expert-in-the-loop bolus timing.
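A hedged sketch of a sliding-window CNN + LSTM phase estimator in the spirit of PhaseNet: a small CNN embeds each PMP and an LSTM models temporal correlation, with a head emitting a phase estimate and a log-variance as an uncertainty proxy. The architecture details below are illustrative, not the published network.

```python
# Minimal CNN-LSTM over time-distributed projections (illustrative sketch).
import torch
import torch.nn as nn

class TinyPhaseNet(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.lstm = nn.LSTM(feat, 64, batch_first=True)
        self.head = nn.Linear(64, 2)      # phase estimate + log-variance

    def forward(self, x):                 # x: (B, T, 1, H, W) pulsed-mode projections
        b, t = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        h, _ = self.lstm(f)               # temporal correlation across PMPs
        out = self.head(h)
        return torch.sigmoid(out[..., 0]), out[..., 1]  # phase in [0,1], log-variance

phase, logvar = TinyPhaseNet()(torch.randn(2, 20, 1, 64, 64))
```

A dynamic-programming pass such as the UDV regularizer would then smooth these per-time-point estimates, weighting each by its predicted uncertainty.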

Coarse for Fine: Bounding Box Supervised Thyroid Ultrasound Image Segmentation Using Spatial Arrangement and Hierarchical Prediction Consistency.

Chi J, Lin G, Li Z, Zhang W, Chen JH, Huang Y

PubMed · Jun 1 2025
Weakly-supervised learning methods have become increasingly attractive for medical image segmentation but suffer from a high dependence on pixel-wise affinities of low-level features, which are easily corrupted in thyroid ultrasound images; as a result, segmentation overfits to the weakly annotated regions without precise delineation of target boundaries. We propose a dual-branch weakly-supervised learning framework that optimizes the backbone segmentation network by calibrating semantic features into a rational spatial distribution under the indirect, coarse guidance of a bounding box mask. Specifically, in the spatial arrangement consistency branch, the maximum activations sampled from the preliminary segmentation prediction and the bounding box mask along the horizontal and vertical dimensions are compared to measure the rationality of the approximate target localization. In the hierarchical prediction consistency branch, target and background prototypes are encapsulated from the semantic features under the combined guidance of the preliminary segmentation prediction and the bounding box mask. The secondary segmentation prediction induced from these prototypes is compared with the preliminary prediction to quantify the rationality of the refined target and background semantic feature perception. Experiments on three thyroid datasets show that our model outperforms existing weakly-supervised methods for thyroid gland and nodule segmentation and is comparable to fully-supervised methods while reducing annotation time. The proposed method provides a weakly-supervised segmentation strategy that simultaneously considers the target's location and the rationality of the target and background semantic feature distributions, improving the applicability of deep-learning-based segmentation in clinical practice.
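The spatial arrangement consistency idea can be sketched as follows: the row-wise and column-wise maxima of the predicted foreground map should match those of the bounding box mask. The loss below is a plausible rendering of that comparison, not the paper's exact formulation.

```python
# Projection-consistency loss between prediction and box mask (hedged sketch).
import torch
import torch.nn.functional as F

def spatial_arrangement_loss(pred, box_mask):
    """pred: (B, 1, H, W) foreground probabilities; box_mask: (B, 1, H, W) in {0, 1}."""
    loss = 0.0
    for dim in (2, 3):                    # project along height, then along width
        p_proj = pred.amax(dim=dim)       # max activation per column / per row
        m_proj = box_mask.amax(dim=dim)
        loss = loss + F.binary_cross_entropy(p_proj, m_proj)
    return loss

pred = torch.rand(4, 1, 96, 96)
box = torch.zeros(4, 1, 96, 96)
box[:, :, 20:70, 30:80] = 1.0
print(spatial_arrangement_loss(pred, box))
```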

Enhancing Pathological Complete Response Prediction in Breast Cancer: The Added Value of Pretherapeutic Contrast-Enhanced Cone Beam Breast CT Semantic Features.

Wang Y, Ma Y, Wang F, Liu A, Zhao M, Bian K, Zhu Y, Yin L, Ye Z

PubMed · Jun 1 2025
To explore the association between pretherapeutic contrast-enhanced cone beam breast CT (CE-CBBCT) features and pathological complete response (pCR), and to develop a predictive model that integrates clinicopathological and imaging features. In this prospective study, a cohort of 200 female patients who underwent CE-CBBCT prior to neoadjuvant therapy and surgery was divided into training (n=150) and test (n=50) sets in a 3:1 ratio. Optimal predictive features were identified using univariate logistic regression and recursive feature elimination with cross-validation (RFECV). Models were constructed using XGBoost and evaluated with the receiver operating characteristic (ROC) curve, calibration curves, and decision curve analysis. The performance of the combined model was further evaluated across molecular subtypes. Feature importance within the combined model was determined using the SHapley Additive exPlanations (SHAP) algorithm. The model incorporating three clinicopathological and six CE-CBBCT imaging features demonstrated robust predictive performance for pCR, with areas under the curve (AUCs) of 0.924 in the training set and 0.870 in the test set. Molecular subtype, spiculation, and adjacent vascular sign (AVS) grade emerged as the most influential SHAP features. The highest AUCs were observed for the HER2-positive subgroup (training: 0.935; test: 0.844), followed by luminal (training: 0.841; test: 0.717) and triple-negative breast cancer (TNBC; training: 0.760; test: 0.583). SHAP analysis indicated that spiculation was crucial for luminal breast cancer prediction, whereas AVS grade was critical for HER2-positive and TNBC cases. Integrating clinicopathological and CE-CBBCT imaging features enhanced pCR prediction accuracy, particularly in HER2-positive cases, underscoring its potential clinical applicability.
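A minimal sketch of the named pipeline (RFECV feature selection around an XGBoost classifier, explained with SHAP) on synthetic stand-in data; hyperparameters and data are hypothetical.

```python
# RFECV + XGBoost + SHAP on synthetic features (illustrative sketch).
import numpy as np
from sklearn.feature_selection import RFECV
from xgboost import XGBClassifier
import shap

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 20))                # clinicopathological + imaging features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=150) > 0).astype(int)

model = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
selector = RFECV(model, cv=5, scoring="roc_auc").fit(X, y)  # prune weak features
X_sel = selector.transform(X)

model.fit(X_sel, y)
explainer = shap.TreeExplainer(model)         # per-feature contributions to pCR risk
shap_values = explainer.shap_values(X_sel)
```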

A Multimodal Model Based on Transvaginal Ultrasound-Based Radiomics to Predict the Risk of Peritoneal Metastasis in Ovarian Cancer: A Multicenter Study.

Zhou Y, Duan Y, Zhu Q, Li S, Zhang C

PubMed · Jun 1 2025
This study aimed to develop a predictive model for peritoneal metastasis (PM) in ovarian cancer using a combination of radiomics and clinical biomarkers to improve diagnostic accuracy. This retrospective cohort study of 619 ovarian cancer patients used demographic data, radiomics, O-RADS standardized descriptions, clinical biomarkers, and histological findings. Radiomics features were extracted using 3D Slicer and PyRadiomics, with feature selection by least absolute shrinkage and selection operator (LASSO) regression. Model development and validation were carried out using logistic regression and machine learning methods. Interobserver agreement was high for radiomics features: 1049 features were initially extracted, and 7 were selected through regression analysis. Multimodal variables, including ascites, fallopian tube invasion, greatest diameter, and HE4 and D-dimer levels, were significant predictors of PM. The developed radiomics nomogram demonstrated strong discriminatory power, with AUC values of 0.912, 0.883, and 0.831 in the training, internal test, and external test sets, respectively, and superior diagnostic performance compared with single-modality models. The integration of multimodal information in a predictive model for PM in ovarian cancer shows promise for enhancing diagnostic accuracy and guiding personalized treatment, offering a potential strategy for improving outcomes in the management of ovarian cancer with PM.
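Radiomics extraction with PyRadiomics, as referenced above, typically looks like the hedged sketch below; the file paths and settings are placeholders, and downstream LASSO selection would operate on the resulting feature table.

```python
# PyRadiomics feature extraction from an image/mask pair (placeholder paths).
import SimpleITK as sitk
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableAllFeatures()                 # shape, first-order, and texture classes

image = sitk.ReadImage("tvus_volume.nii.gz")  # placeholder ultrasound volume
mask = sitk.ReadImage("tumor_mask.nii.gz")    # placeholder lesion segmentation (label 1)
features = extractor.execute(image, mask)

numeric = {k: v for k, v in features.items() if not k.startswith("diagnostics")}
print(len(numeric), "candidate features before LASSO selection")
```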

Accelerated High-resolution T1- and T2-weighted Breast MRI with Deep Learning Super-resolution Reconstruction.

Mesropyan N, Katemann C, Leutner C, Sommer A, Isaak A, Weber OM, Peeters JM, Dell T, Bischoff L, Kuetting D, Pieper CC, Lakghomi A, Luetkens JA

PubMed · Jun 1 2025
To assess the performance of an industry-developed deep learning (DL) algorithm for reconstructing low-resolution Cartesian T1-weighted dynamic contrast-enhanced (T1w) and T2-weighted turbo-spin-echo (T2w) sequences and to compare the results with standard sequences. Female patients with indications for breast MRI were included in this prospective study. The 1.5-Tesla MRI study protocol included T1w and T2w sequences, each acquired both at standard resolution (T1_S and T2_S) and at low resolution with subsequent DL reconstruction (T1_DL and T2_DL). For DL reconstruction, two convolutional networks were used: (1) Adaptive-CS-Net for denoising with compressed sensing, and (2) Precise-Image-Net for resolution upscaling of previously downscaled images. Overall image quality was assessed on a 5-point Likert scale (from 1 = non-diagnostic to 5 = excellent). Apparent signal-to-noise (aSNR) and contrast-to-noise (aCNR) ratios were calculated. Breast Imaging Reporting and Data System (BI-RADS) agreement between the sequence types was assessed. A total of 47 patients were included (mean age, 58±11 years). Acquisition times for T1_DL and T2_DL were reduced by 51% (44 vs. 90 s per dynamic phase) and 46% (102 vs. 192 s), respectively. T1_DL and T2_DL showed higher overall image quality (e.g., 4 [IQR, 4-4] for T1_S vs. 5 [IQR, 5-5] for T1_DL, P<0.001). Both T1_DL and T2_DL showed higher aSNR and aCNR than T1_S and T2_S (e.g., aSNR: 32.35±10.23 for T2_S vs. 27.88±6.86 for T2_DL, P=0.014). Cohen κ agreement for BI-RADS assessment was excellent (0.962, P<0.001). DL for denoising and resolution upscaling reduces acquisition time and improves image quality for T1w and T2w breast MRI.
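Apparent SNR and CNR are conventionally computed from reader-placed regions of interest; the sketch below shows one common formulation, with placeholder ROIs rather than the study's measurement protocol.

```python
# Apparent SNR/CNR from boolean ROI masks (placeholder ROIs, hedged sketch).
import numpy as np

def asnr_acnr(img, tissue_roi, ref_roi, noise_roi):
    """ROIs are boolean masks; noise std is taken from an artifact-free region."""
    noise_sd = img[noise_roi].std()
    asnr = img[tissue_roi].mean() / noise_sd
    acnr = (img[tissue_roi].mean() - img[ref_roi].mean()) / noise_sd
    return asnr, acnr

img = np.random.default_rng(4).normal(100, 10, size=(256, 256))
m = np.zeros_like(img, dtype=bool)
tissue, ref, noise = m.copy(), m.copy(), m.copy()
tissue[50:70, 50:70] = ref[150:170, 150:170] = noise[10:30, 200:220] = True
print(asnr_acnr(img, tissue, ref, noise))
```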