Cardiac Phase Estimation Using Deep Learning Analysis of Pulsed-Mode Projections: Toward Autonomous Cardiac CT Imaging.

Wu P, Haneda E, Pack JD, Heukensfeldt Jansen I, Hsiao A, McVeigh E, De Man B

PubMed · Jun 1, 2025
Cardiac CT plays an important role in diagnosing heart disease but is conventionally limited by a complex workflow that requires dedicated phase and bolus tracking [e.g., electrocardiogram (ECG) gating]. This work reports first progress toward robust, autonomous cardiac CT exams through joint deep learning (DL) and analytical analysis of pulsed-mode projections (PMPs). To this end, cardiac phase and its uncertainty were simultaneously estimated using a novel projection-domain cardiac phase estimation network (PhaseNet), which uses a sliding-window multi-channel feature extraction strategy and a long short-term memory (LSTM) block to capture temporal correlations between time-distributed PMPs. An uncertainty-driven Viterbi (UDV) regularizer was developed to refine the DL estimates at each time point through dynamic programming, applying stronger regularization at time points where the DL estimates carry higher uncertainty. The proposed phase estimation pipeline was evaluated on accurate physics-based emulated data. PhaseNet improved phase estimation accuracy over competing methods in terms of RMSE (~50% improvement vs. a standard CNN-LSTM; ~24% vs. a multi-channel residual network). The UDV regularizer yielded a further ~14% RMSE improvement, for a final cardiac phase RMSE below 6% (phase ranges from 0-100%). To our knowledge, this is the first publication on prospective cardiac phase estimation in the projection domain. Combined with our previous work on PMP-based bolus curve estimation, the proposed method could enable autonomous cardiac scanning without an ECG device or expert-in-the-loop bolus timing.
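
At its core, the UDV step is a smoothness prior enforced by dynamic programming. Below is a minimal sketch of such an uncertainty-weighted Viterbi refinement over quantized phase states with a quadratic transition penalty; the cost form and all names are illustrative assumptions, not the authors' implementation (e.g., phase wrap-around is ignored here).

```python
import numpy as np

def viterbi_phase_smoothing(phase_probs, uncertainty, n_states=100, lam=1.0):
    """Refine per-frame cardiac phase estimates with a Viterbi-style DP.
    Emission costs are down-weighted where network uncertainty is high,
    so the smoothness (transition) term dominates there.

    phase_probs: (T, n_states) per-frame probabilities over quantized phase
    uncertainty: (T,) per-frame uncertainty in [0, 1]
    """
    T = phase_probs.shape[0]
    emission = -np.log(phase_probs + 1e-12) * (1.0 - uncertainty[:, None])
    states = np.arange(n_states)
    # Quadratic penalty encourages smooth phase progression between frames.
    transition = lam * (states[:, None] - states[None, :]) ** 2

    cost = emission[0].copy()
    backptr = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        total = cost[:, None] + transition          # (prev_state, cur_state)
        backptr[t] = np.argmin(total, axis=0)
        cost = total[backptr[t], states] + emission[t]

    # Backtrack the minimum-cost state sequence.
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmin(cost))
    for t in range(T - 1, 0, -1):
        path[t - 1] = backptr[t, path[t]]
    return path * (100.0 / n_states)   # phase in percent
```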

Coarse for Fine: Bounding Box Supervised Thyroid Ultrasound Image Segmentation Using Spatial Arrangement and Hierarchical Prediction Consistency.

Chi J, Lin G, Li Z, Zhang W, Chen JH, Huang Y

PubMed · Jun 1, 2025
Weakly-supervised learning methods have become increasingly attractive for medical image segmentation, but they depend heavily on quantifying pixel-wise affinities of low-level features. These affinities are easily corrupted in thyroid ultrasound images, causing the segmentation to over-fit to weakly annotated regions without precise delineation of target boundaries. We propose a dual-branch weakly-supervised learning framework that optimizes the backbone segmentation network by calibrating semantic features into a rational spatial distribution under the indirect, coarse guidance of a bounding-box mask. Specifically, in the spatial arrangement consistency branch, the maximum activations sampled from the preliminary segmentation prediction and the bounding-box mask along the horizontal and vertical dimensions are compared to measure the rationality of the approximate target localization. In the hierarchical prediction consistency branch, target and background prototypes are encapsulated from the semantic features under the combined guidance of the preliminary segmentation prediction and the bounding-box mask; the secondary segmentation prediction induced from these prototypes is then compared with the preliminary prediction to quantify the rationality of the learned target and background semantics. Experiments on three thyroid datasets show that our model outperforms existing weakly-supervised methods for thyroid gland and nodule segmentation and is comparable to fully-supervised methods while requiring less annotation time. The proposed method provides a weakly-supervised segmentation strategy that simultaneously considers the target's location and the rationality of the target and background semantic feature distribution, improving the applicability of deep learning-based segmentation in clinical practice.
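
As a rough illustration of the spatial arrangement consistency idea, the sketch below compares row-wise and column-wise maximum activations of the predicted foreground map against the bounding-box mask; the binary cross-entropy form of the loss is an assumption, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def spatial_arrangement_loss(pred, box_mask):
    """Match the 1-D max-projection profiles of the prediction to those
    of the coarse bounding-box mask, so the predicted target's extent
    agrees with the box along both axes.

    pred:     (B, 1, H, W) sigmoid foreground probabilities
    box_mask: (B, 1, H, W) binary bounding-box mask (float)
    """
    pred_rows, _ = pred.max(dim=3)       # max over width  -> (B, 1, H)
    pred_cols, _ = pred.max(dim=2)       # max over height -> (B, 1, W)
    box_rows, _ = box_mask.max(dim=3)
    box_cols, _ = box_mask.max(dim=2)
    return (F.binary_cross_entropy(pred_rows, box_rows)
            + F.binary_cross_entropy(pred_cols, box_cols))
```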

Enhancing Pathological Complete Response Prediction in Breast Cancer: The Added Value of Pretherapeutic Contrast-Enhanced Cone Beam Breast CT Semantic Features.

Wang Y, Ma Y, Wang F, Liu A, Zhao M, Bian K, Zhu Y, Yin L, Ye Z

PubMed · Jun 1, 2025
To explore the association between pretherapeutic contrast-enhanced cone-beam breast CT (CE-CBBCT) features and pathological complete response (pCR), and to develop a predictive model that integrates clinicopathological and imaging features. In this prospective study, 200 female patients who underwent CE-CBBCT before neoadjuvant therapy and surgery were divided into training (n=150) and test (n=50) sets in a 3:1 ratio. Optimal predictive features were identified using univariate logistic regression and recursive feature elimination with cross-validation (RFECV). Models were constructed using XGBoost and evaluated with receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis. The performance of the combined model was further evaluated across molecular subtypes, and feature importance within it was determined using the SHapley Additive exPlanations (SHAP) algorithm. The model incorporating three clinicopathological and six CE-CBBCT imaging features demonstrated robust predictive performance for pCR, with areas under the curve (AUCs) of 0.924 in the training set and 0.870 in the test set. Molecular subtype, spiculation, and adjacent vascular sign (AVS) grade emerged as the most influential SHAP features. The highest AUCs were observed for the HER2-positive subgroup (training: 0.935; test: 0.844), followed by luminal (training: 0.841; test: 0.717) and triple-negative breast cancer (TNBC; training: 0.760; test: 0.583). SHAP analysis indicated that spiculation was crucial for luminal breast cancer prediction, while AVS grade was critical for HER2-positive and TNBC cases. Integrating clinicopathological and CE-CBBCT imaging features enhanced pCR prediction accuracy, particularly in HER2-positive cases, underscoring its potential clinical applicability.
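
A hedged sketch of the modeling pipeline described here (RFECV for feature selection, XGBoost for classification, SHAP for attribution), using a synthetic stand-in for the feature matrix; all sizes, names, and hyperparameters are illustrative, and the study's exact settings may differ.

```python
import numpy as np
import shap
import xgboost as xgb
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold

# Synthetic stand-in for clinicopathological + CE-CBBCT features and
# binary pCR labels (signal injected into the first four columns).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = (X[:, :4].sum(axis=1) + rng.normal(size=200) > 0).astype(int)

# Recursive feature elimination with cross-validation, scored by AUC.
selector = RFECV(
    estimator=xgb.XGBClassifier(eval_metric="logloss"),
    step=1,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    scoring="roc_auc",
)
selector.fit(X, y)
X_sel = X[:, selector.support_]

model = xgb.XGBClassifier(eval_metric="logloss").fit(X_sel, y)

# SHAP values rank each retained feature's contribution to the prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_sel)
```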

A Multimodal Model Based on Transvaginal Ultrasound-Based Radiomics to Predict the Risk of Peritoneal Metastasis in Ovarian Cancer: A Multicenter Study.

Zhou Y, Duan Y, Zhu Q, Li S, Zhang C

PubMed · Jun 1, 2025
This study aimed to develop a predictive model for peritoneal metastasis (PM) in ovarian cancer that combines radiomics and clinical biomarkers to improve diagnostic accuracy. This retrospective cohort study of 619 ovarian cancer patients involved demographic data, radiomics, O-RADS standardized descriptions, clinical biomarkers, and histological findings. Radiomics features were extracted using 3D Slicer and PyRadiomics, with feature selection by Least Absolute Shrinkage and Selection Operator (LASSO) regression. Model development and validation were carried out using logistic regression and machine learning methods. Interobserver agreement was high for radiomics features, with 1049 features initially extracted and 7 retained after regression analysis. Multimodal predictors such as ascites, fallopian tube invasion, greatest diameter, HE4, and D-dimer levels were significant predictors of PM. The developed radiomics nomogram demonstrated strong discriminatory power, with AUC values of 0.912, 0.883, and 0.831 in the training, internal test, and external test sets, respectively, outperforming single-modality models. The integration of multimodal information in a predictive model for PM in ovarian cancer shows promise for enhancing diagnostic accuracy and guiding personalized treatment, offering a potential strategy for improving outcomes in the management of ovarian cancer with PM.
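
The LASSO selection step might look like the following sketch, with a synthetic stand-in for the 1049-feature radiomics matrix; the sizes, injected signal, and final logistic model are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the radiomics matrix (patients x 1049
# PyRadiomics features); PM labels depend on a few columns for signal.
rng = np.random.default_rng(42)
X = rng.normal(size=(619, 1049))
y = (X[:, :7].sum(axis=1) + rng.normal(size=619) > 0).astype(int)

# LASSO drives most coefficients to exactly zero, keeping a small
# informative subset (the study retained 7 features this way).
lasso = make_pipeline(StandardScaler(), LassoCV(cv=5, random_state=0))
lasso.fit(X, y)
selected = np.flatnonzero(lasso.named_steps["lassocv"].coef_)
print(f"{selected.size} features retained")

# The retained features can then feed a logistic-regression nomogram.
clf = LogisticRegression(max_iter=1000).fit(X[:, selected], y)
```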

Accelerated High-resolution T1- and T2-weighted Breast MRI with Deep Learning Super-resolution Reconstruction.

Mesropyan N, Katemann C, Leutner C, Sommer A, Isaak A, Weber OM, Peeters JM, Dell T, Bischoff L, Kuetting D, Pieper CC, Lakghomi A, Luetkens JA

PubMed · Jun 1, 2025
To assess the performance of an industry-developed deep learning (DL) algorithm for reconstructing low-resolution Cartesian T1-weighted dynamic contrast-enhanced (T1w) and T2-weighted turbo-spin-echo (T2w) sequences, compared with standard sequences. Female patients with indications for breast MRI were included in this prospective study. The 1.5 Tesla study protocol included T1w and T2w, each acquired in standard resolution (T1S and T2S) and in low resolution with subsequent DL reconstruction (T1DL and T2DL). The DL reconstruction used two convolutional networks: (1) Adaptive-CS-Net for denoising with compressed sensing, and (2) Precise-Image-Net for resolution upscaling of previously downscaled images. Overall image quality was assessed on a 5-point Likert scale (from 1=non-diagnostic to 5=excellent), and apparent signal-to-noise (aSNR) and contrast-to-noise (aCNR) ratios were calculated. Breast Imaging Reporting and Data System (BI-RADS) agreement between the sequence types was also assessed. A total of 47 patients were included (mean age, 58±11 years). Acquisition times for T1DL and T2DL were reduced by 51% (44 vs. 90 s per dynamic phase) and 46% (102 vs. 192 s), respectively. T1DL and T2DL showed higher overall image quality (e.g., 4 [IQR, 4-4] for T1S vs. 5 [IQR, 5-5] for T1DL, P<0.001). Both T1DL and T2DL revealed higher aSNR and aCNR than T1S and T2S (e.g., aSNR: 27.88±6.86 for T2S vs. 32.35±10.23 for T2DL, P=0.014). Cohen's κ for BI-RADS agreement between sequence types was excellent (0.962, P<0.001). DL-based denoising and resolution upscaling reduce acquisition time and improve image quality for T1w and T2w breast MRI.
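
Apparent SNR and CNR are typically computed from region-of-interest statistics. A minimal sketch follows; the ROI choices (tissue vs. fat signal, background air for noise) are common breast-MRI practice and an assumption here, not the study's exact protocol.

```python
import numpy as np

def apparent_snr_cnr(image, tissue_roi, fat_roi, noise_roi):
    """Apparent SNR/CNR from boolean ROI masks over a magnitude image.

    aSNR = mean(tissue) / std(background noise)
    aCNR = |mean(tissue) - mean(fat)| / std(background noise)
    """
    signal = image[tissue_roi].mean()
    reference = image[fat_roi].mean()
    noise = image[noise_roi].std()
    return signal / noise, abs(signal - reference) / noise
```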

Scale-Aware Super-Resolution Network With Dual Affinity Learning for Lesion Segmentation From Medical Images.

Luo L, Li Y, Chai Z, Lin H, Heng PA, Chen H

PubMed · Jun 1, 2025
Convolutional neural networks (CNNs) have shown remarkable progress in medical image segmentation. However, lesion segmentation remains a challenge for state-of-the-art CNN-based algorithms because of the variance in lesion scales and shapes. On the one hand, tiny lesions are hard to delineate precisely from medical images, which are often of low resolution. On the other hand, segmenting large lesions requires large receptive fields, which exacerbates the first challenge. In this article, we present a scale-aware super-resolution (SR) network to adaptively segment lesions of various sizes from low-resolution (LR) medical images. Our proposed network contains dual branches that simultaneously conduct lesion mask SR (LMSR) and lesion image SR (LISR). Meanwhile, we introduce scale-aware dilated convolution (SDC) blocks into the multitask decoders to adaptively adjust the receptive fields of the convolutional kernels according to lesion size. To guide the segmentation branch to learn from richer high-resolution (HR) features, we propose a feature affinity (FA) module and a scale affinity (SA) module to enhance the multitask learning of the dual branches. On multiple challenging lesion segmentation datasets, our proposed network achieved consistent improvements over other state-of-the-art methods. Code will be available at: https://github.com/poiuohke/SASR_Net.
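
A plausible sketch of a scale-aware dilated convolution block in PyTorch: parallel dilated branches fused by input-dependent weights, so the effective receptive field adapts to lesion size. The gating design here is an assumption, not the paper's exact SDC block.

```python
import torch
import torch.nn as nn

class ScaleAwareDilatedConv(nn.Module):
    """Parallel 3x3 convolutions at several dilation rates, fused by
    softmax weights predicted from globally pooled input features."""

    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, len(dilations), 1),
            nn.Softmax(dim=1),
        )

    def forward(self, x):
        w = self.gate(x)  # (B, n_branches, 1, 1) branch weights
        out = sum(w[:, i:i + 1] * b(x) for i, b in enumerate(self.branches))
        return torch.relu(out)
```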

Classification of differentially activated groups of fibroblasts using morphodynamic and motile features.

Kang M, Min C, Devarasou S, Shin JH

PubMed · Jun 1, 2025
Fibroblasts play essential roles in cancer progression, exhibiting activation states that can either promote or inhibit tumor growth. Understanding these differential activation states is critical for targeting the tumor microenvironment (TME) in cancer therapy. However, traditional molecular markers used to identify cancer-associated fibroblasts are limited by their co-expression across multiple fibroblast subtypes, making it difficult to distinguish specific activation states. Morphological and motility characteristics of fibroblasts reflect their underlying gene expression patterns and activation states, making these features valuable descriptors of fibroblast behavior. This study proposes an artificial intelligence-based classification framework to identify and characterize differentially activated fibroblasts by analyzing their morphodynamic and motile features. We extract these features from label-free live-cell imaging data of fibroblasts co-cultured with breast cancer cell lines using deep learning and machine learning algorithms. Our findings show that morphodynamic and motile features offer robust insights into fibroblast activation states, complementing molecular markers and overcoming their limitations. This biophysical state-based cellular classification framework provides a novel, comprehensive approach for characterizing fibroblast activation, with significant potential for advancing our understanding of the TME and informing targeted cancer therapies.
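
A minimal sketch of the downstream classification step, assuming per-cell morphodynamic and motility descriptors have already been extracted from the imaging data; the feature set, labels, and random-forest choice are illustrative stand-ins for the study's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for per-cell descriptors from label-free live-cell
# tracks (e.g., aspect ratio, area dynamics, migration speed,
# directionality); signal injected into two columns for illustration.
rng = np.random.default_rng(1)
features = rng.normal(size=(300, 6))            # 300 tracked fibroblasts
labels = (features[:, 0] + features[:, 3]
          + rng.normal(size=300) > 0).astype(int)  # activation state

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, features, labels, cv=5, scoring="f1")
print(f"cross-validated F1: {scores.mean():.2f}")
```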

Neuroimaging and machine learning in eating disorders: a systematic review.

Monaco F, Vignapiano A, Di Gruttola B, Landi S, Panarello E, Malvone R, Palermo S, Marenna A, Collantoni E, Celia G, Di Stefano V, Meneguzzo P, D'Angelo M, Corrivetti G, Steardo L

PubMed · Jun 1, 2025
Eating disorders (EDs), including anorexia nervosa (AN), bulimia nervosa (BN), and binge eating disorder (BED), are complex psychiatric conditions with high morbidity and mortality. Neuroimaging and machine learning (ML) represent promising approaches to improve diagnosis, understand pathophysiological mechanisms, and predict treatment response. This systematic review aimed to evaluate the application of ML techniques to neuroimaging data in EDs. Following PRISMA guidelines (PROSPERO registration: CRD42024628157), we systematically searched PubMed and APA PsycINFO for studies published between 2014 and 2024. Inclusion criteria encompassed human studies using neuroimaging and ML methods applied to AN, BN, or BED. Data extraction focused on study design, imaging modalities, ML techniques, and performance metrics. Quality was assessed using the GRADE framework and the ROBINS-I tool. Out of 185 records screened, 5 studies met the inclusion criteria. Most applied support vector machines (SVMs) or other supervised ML models to structural MRI or diffusion tensor imaging data. Cortical thickness alterations in AN and diffusion-based metrics effectively distinguished ED subtypes. However, all studies were observational, heterogeneous, and at moderate to serious risk of bias. Sample sizes were small, and external validation was lacking. ML applied to neuroimaging shows potential for improving ED characterization and outcome prediction. Nevertheless, methodological limitations restrict generalizability. Future research should focus on larger, multicenter, and multimodal studies to enhance clinical applicability. Level IV, multiple observational studies with methodological heterogeneity and moderate to serious risk of bias.

Deep Learning to Localize Photoacoustic Sources in Three Dimensions: Theory and Implementation.

Gubbi MR, Bell MAL

PubMed · Jun 1, 2025
Surgical tool tip localization and tracking are essential components of surgical and interventional procedures. The cross sections of tool tips can be treated as acoustic point sources, enabling these tasks with deep learning applied to photoacoustic channel data. However, source localization was previously limited to the lateral and axial dimensions of an ultrasound transducer. In this article, we developed a novel deep learning-based 3-D photoacoustic point source localization system using an object-detection-based approach extended from our previous work. In addition, we derived theoretical relationships among point source locations, sound speeds, and waveform shapes in raw photoacoustic channel data frames, and used this theory to develop a novel deep learning instance-segmentation-based 3-D point source localization system. When tested with 4000 simulated, 993 phantom, and 1983 ex vivo channel data frames, the two systems achieved F1 scores as high as 99.82%, 93.05%, and 98.20%, respectively, and Euclidean localization errors (mean ± one standard deviation) as low as 1.46 ± 1.11 mm, 1.58 ± 1.30 mm, and 1.55 ± 0.86 mm, respectively. In addition, the instance-segmentation-based system simultaneously estimated sound speeds with absolute errors (mean ± one standard deviation) of 19.22 ± 26.26 m/s in simulated data and standard deviations ranging from 14.6 to 32.3 m/s in experimental data. These results demonstrate the potential of the proposed photoacoustic imaging-based methods to localize and track tool tips in three dimensions during surgical and interventional procedures.
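
The geometry underlying the derived relationships is the standard hyperbolic arrival-time curve a point source traces across the array in channel data: t(x) = sqrt((x - x0)^2 + z0^2) / c, where the sound speed c scales the curvature. A small sketch follows; the array geometry and source position are assumed for illustration.

```python
import numpy as np

def arrival_time_profile(element_x, src_x, src_z, c=1540.0):
    """Arrival time (s) of a photoacoustic wavefront from a point source
    at (src_x, src_z) across a linear array with element lateral
    positions element_x (all in meters)."""
    return np.hypot(element_x - src_x, src_z) / c

elements = np.linspace(-19e-3, 19e-3, 128)   # hypothetical 128-element array
t = arrival_time_profile(elements, src_x=2e-3, src_z=25e-3)
```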

GAN Inversion for Data Augmentation to Improve Colonoscopy Lesion Classification.

Golhar MV, Bobrow TL, Ngamruengphong S, Durr NJ

PubMed · Jun 1, 2025
A major challenge in applying deep learning to medical imaging is the paucity of annotated data. This study addresses that challenge for colonoscopy lesion classification by using synthetic images for data augmentation. We demonstrate that synthetic colonoscopy images generated by Generative Adversarial Network (GAN) inversion can serve as training data to improve the polyp classification performance of deep learning models. We invert pairs of images with the same label into a semantically rich and disentangled latent space and manipulate the latent representations to produce new synthetic images that retain the input pairs' label. We perform image modality translation (style transfer) between white light and narrow-band imaging (NBI), and we generate realistic synthetic lesion images by interpolating between original training images to increase the variety of lesion shapes in the training dataset. Our experiments show that GAN inversion can produce multiple colonoscopy data augmentations that improve downstream polyp classification performance by 2.7% in F1-score and 4.9% in sensitivity over other methods, including state-of-the-art data augmentation. Testing on unseen out-of-domain data also showed improvements of 2.9% in F1-score and 2.7% in sensitivity. This approach outperforms other colonoscopy data augmentation techniques and does not require re-training multiple generative models. It also effectively uses information from diverse public datasets, even those not specifically designed for the targeted downstream task, resulting in strong domain generalizability. Project code and model: https://github.com/DurrLab/GAN-Inversion.
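
A minimal sketch of the latent interpolation step, assuming a StyleGAN-like W space of shape (1, 512); the latent shape and the generator decode call in the final comment are hypothetical, not the project's exact API.

```python
import torch

def interpolate_latents(w_a, w_b, n_steps=5):
    """Linearly interpolate between the inverted latent codes of two
    same-label images; each intermediate code decodes to a synthetic
    image that keeps the shared label."""
    alphas = torch.linspace(0.0, 1.0, n_steps)
    return [(1 - a) * w_a + a * w_b for a in alphas]

# Placeholder latents; in practice these come from GAN inversion of two
# real colonoscopy images with the same label.
w_a, w_b = torch.randn(1, 512), torch.randn(1, 512)
ws = interpolate_latents(w_a, w_b)
# Hypothetical decode step: images = [G.synthesis(w) for w in ws]
```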