Page 1 of 19181 results
You are viewing papers added to our database from 2025-08-25 to 2025-08-31.

Multi-regional Multiparametric Deep Learning Radiomics for Diagnosis of Clinically Significant Prostate Cancer.

Liu X, Liu R, He H, Yan Y, Zhang L, Zhang Q

PubMed | Aug 29 2025
Non-invasive and precise identification of clinically significant prostate cancer (csPCa) is essential for the management of prostatic diseases. Our study introduces a novel and interpretable diagnostic method for csPCa, leveraging multi-regional, multiparametric deep learning radiomics based on magnetic resonance imaging (MRI). The prostate regions, including the peripheral zone (PZ) and transition zone (TZ), are automatically segmented using a deep learning framework that combines convolutional neural networks and transformers to generate region-specific masks. Radiomics features are then extracted and selected from multiparametric MRI at the PZ, TZ, and their combined area to develop a multi-regional multiparametric radiomics diagnostic model. Feature contributions are quantified to enhance the model's interpretability and assess the importance of different imaging parameters across various regions. The multi-regional model substantially outperforms single-region models, achieving an optimal area under the curve (AUC) of 0.903 on the internal test set, and an AUC of 0.881 on the external test set. Comparison with other methods demonstrates that our proposed approach exhibits superior performance. Features from diffusion-weighted imaging and apparent diffusion coefficient play a crucial role in csPCa diagnosis, with contribution degrees of 53.28% and 39.52%, respectively. We introduce an interpretable, multi-regional, multiparametric diagnostic model for csPCa using deep learning radiomics. By integrating features from various zones, our model improves diagnostic accuracy and provides clear insights into the key imaging parameters, offering strong potential for clinical applications in csPCa management.
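As a rough illustration of the region-wise feature selection step described above, the following is a minimal sketch (not the authors' pipeline) of ranking radiomics features by univariate AUC and keeping the top candidates; the feature names, toy data, and top-k cutoff are all illustrative assumptions.

```python
# Hedged sketch: rank region-specific radiomics features by univariate AUC
# and keep the strongest ones. Data and feature names are invented.

def auc(scores, labels):
    """Rank-based AUC: probability that a positive case outranks a negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def select_features(feature_matrix, labels, names, top_k=2):
    """Keep the top_k features with the highest univariate AUC."""
    scored = []
    for j, name in enumerate(names):
        column = [row[j] for row in feature_matrix]
        scored.append((auc(column, labels), name))
    scored.sort(reverse=True)
    return [name for _, name in scored[:top_k]]

# Toy example: three features drawn from PZ, TZ, and the combined region,
# for six patients (1 = csPCa, 0 = benign).
X = [
    [0.9, 0.2, 5.0],
    [0.8, 0.1, 4.0],
    [0.7, 0.3, 6.0],
    [0.2, 0.4, 5.5],
    [0.1, 0.2, 4.5],
    [0.3, 0.1, 5.2],
]
y = [1, 1, 1, 0, 0, 0]
names = ["PZ_entropy", "TZ_contrast", "combined_volume"]
selected = select_features(X, y, names, top_k=1)
```

In a real radiomics workflow this univariate screen would typically be followed by multivariate selection and a classifier fit with cross-validation.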

Incomplete Multi-modal Disentanglement Learning with Application to Alzheimer's Disease Diagnosis.

Han K, Hu D, Zhao F, Liu T, Yang F, Li G

PubMed | Aug 29 2025
Multi-modal neuroimaging data, including magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (PET), have greatly advanced the computer-aided diagnosis of Alzheimer's disease (AD) by providing shared and complementary information. However, the problem of incomplete multi-modal data remains inevitable and challenging. Conventional strategies that exclude subjects with missing data or synthesize missing scans either result in substantial sample reduction or introduce unwanted noise. To address this issue, we propose an Incomplete Multi-modal Disentanglement Learning method (IMDL) for AD diagnosis without missing scan synthesis, a novel model that employs a tiny Transformer to fuse incomplete multi-modal features extracted by modality-wise variational autoencoders adaptively. Specifically, we first design a cross-modality contrastive learning module to encourage modality-wise variational autoencoders to disentangle shared and complementary representations of each modality. Then, to alleviate the potential information gap between the representations obtained from complete and incomplete multi-modal neuroimages, we leverage the technique of adversarial learning to harmonize these representations with two discriminators. Furthermore, we develop a local attention rectification module comprising local attention alignment and multi-instance attention rectification to enhance the localization of atrophic areas associated with AD. This module aligns inter-modality and intra-modality attention within the Transformer, thus making attention weights more explainable. Extensive experiments conducted on ADNI and AIBL datasets demonstrated the superior performance of the proposed IMDL in AD diagnosis, and a further validation on the HABS-HD dataset highlighted its effectiveness for dementia diagnosis using different multi-modal neuroimaging data (i.e., T1-weighted MRI and diffusion tensor imaging).
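The cross-modality contrastive module above can be sketched with a standard InfoNCE-style objective; this is not the paper's exact loss, just a minimal toy version in which the shared MRI and PET embeddings of the same subject are pulled together and those of other subjects pushed apart. All embeddings below are invented.

```python
# Hedged sketch of a cross-modality contrastive objective (InfoNCE-style),
# assumed here as a stand-in for the paper's cross-modality contrastive module.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(mri_shared, pet_shared, temperature=0.1):
    """Mean cross-entropy of matching each MRI embedding to its own PET embedding."""
    loss = 0.0
    for i, u in enumerate(mri_shared):
        logits = [cosine(u, v) / temperature for v in pet_shared]
        m = max(logits)
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += log_z - logits[i]   # -log softmax at the matching index
    return loss / len(mri_shared)

# Toy embeddings: correctly aligned pairs give a lower loss than shuffled pairs.
mri = [[1.0, 0.0], [0.0, 1.0]]
pet_aligned = [[0.9, 0.1], [0.1, 0.9]]
pet_shuffled = [[0.1, 0.9], [0.9, 0.1]]
loss_aligned = info_nce(mri, pet_aligned)
loss_shuffled = info_nce(mri, pet_shuffled)
```

Minimizing such a loss encourages the shared latent spaces of the two modalities to agree subject by subject, which is the intuition behind the disentanglement of shared versus complementary representations.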

Identifying key brain pathology in bipolar and unipolar depression using a region-specific brain aging trajectories approach: Insights from the Taiwan Aging and Mental Illness Cohort.

Zhu JD, Chi IJ, Hsu HY, Tsai SJ, Yang AC

PubMed | Aug 29 2025
Identifying key areas of brain dysfunction in mental illness is critical for developing precision diagnosis and treatment. This study aimed to develop region-specific brain aging trajectory prediction models using multimodal magnetic resonance imaging (MRI) to identify similarities and differences in abnormal aging between bipolar disorder (BD) and major depressive disorder (MDD) and pinpoint key brain regions of structural and functional change specific to each disorder. Neuroimaging data from 340 healthy controls, 110 BD participants, and 68 MDD participants were included from the Taiwan Aging and Mental Illness cohort. We constructed 228 models using T1-weighted MRI, resting-state functional MRI, and diffusion tensor imaging data. Gaussian process regression was used to train models for estimating brain aging trajectories using structural and functional maps across various brain regions. Our models demonstrated robust performance, revealing accelerated aging in 66 gray matter regions in BD and 67 in MDD, with 13 regions common to both disorders. The BD group showed accelerated aging in 17 regions on functional maps, whereas no such regions were found in MDD. Fractional anisotropy analysis identified 43 aging white matter tracts in BD and 39 in MDD, with 16 tracts common to both disorders. Importantly, there were also unique brain regions with accelerated aging specific to each disorder. These findings highlight the potential of brain aging trajectories as biomarkers for BD and MDD, offering insights into distinct and overlapping neuroanatomical changes. Incorporating region-specific changes in brain structure and function over time could enhance the understanding and treatment of mental illness.
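The "accelerated aging" finding above rests on the brain age gap: a region-specific model's predicted age minus the subject's chronological age. A minimal sketch of that bookkeeping, with invented regions, predictions, and threshold:

```python
# Hedged sketch: per-region brain age gap (predicted minus chronological age),
# used to flag regions that appear older than the subject. Values are illustrative.

def brain_age_gaps(predicted_ages, chronological_age):
    """Map each region to its brain age gap; positive values suggest the
    region appears older than the subject's actual age."""
    return {region: pred - chronological_age
            for region, pred in predicted_ages.items()}

def accelerated_regions(gaps, threshold=5.0):
    """Regions whose gap exceeds a chosen threshold (in years)."""
    return sorted(r for r, g in gaps.items() if g > threshold)

# One illustrative subject, age 50, with three regional age predictions.
predictions = {"hippocampus": 58.0, "prefrontal": 51.0, "amygdala": 56.5}
gaps = brain_age_gaps(predictions, 50.0)
flagged = accelerated_regions(gaps)
```

In practice the gap would be compared against a healthy-control distribution (as the study does via its 228 Gaussian process regression models) rather than a fixed threshold.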

Age- and sex-related changes in proximal humeral volumetric BMD assessed via chest CT with a deep learning-based segmentation model.

Li S, Tang C, Zhang H, Ma C, Weng Y, Chen B, Xu S, Xu H, Giunchiglia F, Lu WW, Guo D, Qin Y

PubMed | Aug 29 2025
Accurate assessment of proximal humeral volumetric bone mineral density (vBMD) is essential for surgical planning in shoulder pathology. However, age-related changes in proximal humeral vBMD remain poorly characterized. This study developed a deep learning-based method to assess proximal humeral vBMD and identified sex-specific age-related changes. It also demonstrated that lumbar spine vBMD is not a valid substitute. This study aimed to develop a deep learning-based method for proximal humeral vBMD assessment and to investigate its age- and sex-related changes, as well as its correlation with lumbar spine vBMD. An nnU-Net-based deep learning pipeline was developed to automatically segment the proximal humerus on chest CT scans from 2,675 adults. Segmentation performance was assessed using the Dice Similarity Coefficient (DSC), Intersection over Union (IoU), 95th-percentile Hausdorff Distance (95HD), and Average Symmetric Surface Distance (ASSD). Phantom-calibrated vBMD (total, trabecular, and BMAT-corrected trabecular) was quantified for each subject. Age-related distributions were modeled with generalized additive models for location, scale, and shape (GAMLSS) to generate sex-specific P3-P97 percentile curves. Lumbar spine vBMD was measured in 1,460 individuals for correlation analysis. Segmentation was highly accurate (DSC 98.42 ± 0.20%; IoU 96.89 ± 0.42%; 95HD 1.12 ± 0.37 mm; ASSD 0.94 ± 0.31 mm). In males, total, trabecular, and BMAT-corrected trabecular vBMD declined approximately linearly from early adulthood. In females, a pronounced inflection occurred at ~40-45 years: values were stable or slightly rising beforehand, then all percentiles dropped steeply and synchronously, indicating accelerated menopause-related loss. In females, vBMD declined earlier in the lumbar spine than in the proximal humerus. Correlations between proximal humeral and lumbar spine vBMD were low to moderate overall and weakened after age 50.
We present a novel, automated method for quantifying proximal humeral vBMD from chest CT, revealing distinct, sex-specific aging patterns. Males' humeral vBMD declines linearly, while females experience an earlier, accelerated loss. Moreover, the peak humeral vBMD in females occurs later than that of the lumbar spine, and spinal measurements cannot reliably substitute for humeral BMD in clinical assessment.
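The P3-P97 percentile curves above are fitted with GAMLSS, which smooths across age; as a much cruder illustration of the same idea, one can compute empirical percentiles per age bin. The values, bins, and percentile levels below are synthetic.

```python
# Hedged sketch: per-age-bin empirical percentiles of vBMD as a crude stand-in
# for GAMLSS percentile curves. All numbers are synthetic.

def percentile(values, p):
    """Empirical percentile by linear interpolation between order statistics."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0
    lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

def percentile_curves(ages, vbmd, bins, ps=(3, 50, 97)):
    """For each (lo, hi) age bin, report the requested percentiles of vBMD."""
    curves = {}
    for lo, hi in bins:
        in_bin = [v for a, v in zip(ages, vbmd) if lo <= a < hi]
        curves[(lo, hi)] = {p: percentile(in_bin, p) for p in ps}
    return curves

ages = [42, 43, 44, 47, 48, 49]
vbmd = [180.0, 175.0, 170.0, 150.0, 140.0, 130.0]   # mg/cm^3, synthetic
curves = percentile_curves(ages, vbmd, bins=[(40, 45), (45, 50)])
```

A steep, synchronous drop of all percentile levels between adjacent bins is the binned analogue of the menopause-related inflection the study reports in females.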

Artificial intelligence software to detect small hepatic lesions on hepatobiliary-phase images using multiscale sampling.

Maeda S, Nakamura Y, Higaki T, Karasudani A, Yamaguchi T, Ishihara M, Baba T, Kondo S, Fonseca D, Awai K

PubMed | Aug 29 2025
To investigate the effect of multiscale sampling artificial intelligence (msAI) software adapted to small hepatic lesions on the diagnostic performance of readers interpreting gadoxetic acid-enhanced hepatobiliary-phase (HBP) images. HBP images of 30 patients harboring 186 hepatic lesions were included. Three board-certified radiologists, 9 radiology residents, and 2 general physicians interpreted HBP image data sets twice, once with and once without the msAI software, at 2-week intervals. Jackknife free-response receiver-operating characteristic analysis was performed to calculate the figure of merit (FOM) for detecting hepatic lesions. The negative consultation ratio (NCR), the percentage of correct diagnoses that became incorrect with the AI software, was calculated. We defined readers with an NCR lower than 10% as those who correctly rejected the false findings presented by the software. The msAI software significantly improved the lesion localization fraction (LLF) for all readers (0.74 vs 0.82, p < 0.01); the FOM did not (0.76 vs 0.78, p = 0.45). In lesion-size-based subgroup analysis, the LLF (0.40 vs 0.53, p < 0.01) improved significantly with the AI software even for lesions smaller than 6 mm, whereas the FOM (0.63 vs 0.66, p = 0.51) showed no significant difference. Among the 10 readers with an NCR lower than 10%, not only the LLF but also the FOM was significantly better with the software (LLF 0.77 vs 0.82, FOM 0.79 vs 0.84, both p < 0.01). The detectability of small hepatic lesions on HBP images was improved with msAI software, especially when its results were properly evaluated.
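The two reader-study quantities above (LLF and NCR) are simple ratios; the sketch below shows one plausible way to compute them, with invented per-case marking data for a single hypothetical reader.

```python
# Hedged sketch of the two reader-study metrics: the lesion localization
# fraction (localized lesions / total lesions) and the negative consultation
# ratio (correct unaided reads that flipped to incorrect with AI).
# The reading data below are invented for illustration.

def lesion_localization_fraction(localized, total_lesions):
    return localized / total_lesions

def negative_consultation_ratio(without_ai, with_ai):
    """Fraction of cases read correctly without AI that became
    incorrect once the AI output was shown."""
    flipped = sum(1 for w, a in zip(without_ai, with_ai) if w and not a)
    correct_without = sum(without_ai)
    return flipped / correct_without

# One illustrative reader over 10 cases (True = correct diagnosis).
unaided = [True, True, True, True, True, True, True, True, False, False]
aided   = [True, True, True, True, True, True, True, False, True, True]
ncr = negative_consultation_ratio(unaided, aided)
llf = lesion_localization_fraction(localized=41, total_lesions=50)
```

Under this toy data the reader's NCR is 0.125, so by the study's 10% cutoff this reader would not belong to the low-NCR group that benefited most from the software.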

Clinical Consequences of Deep Learning Image Reconstruction at CT.

Lubner MG, Pickhardt PJ, Toia GV, Szczykutowicz TP

PubMed | Aug 29 2025
Deep learning reconstruction (DLR) offers a variety of advantages over the current standard iterative reconstruction techniques, including decreased image noise without changes in noise texture and less susceptibility to spatial resolution limitations at low dose. These advances may allow for more aggressive dose reduction in CT imaging while maintaining image quality and diagnostic accuracy. However, the performance of DLRs is impacted by the type of framework and training data used. In addition, the patient size and the clinical task being performed may affect the amount of dose reduction that can reasonably be employed. Multiple DLRs are currently FDA approved, with a growing body of literature evaluating their performance; however, continued work is warranted across a variety of clinical scenarios to fully explore the evolving potential of DLR. Depending on the type and strength of DLR applied, blurring and occasionally other artifacts may be introduced. DLRs also show promise in artifact reduction, particularly metal artifact reduction. This commentary focuses primarily on current DLR data for abdominal applications, current challenges, and future areas of potential exploration.

A hybrid computer vision model to predict lung cancer in diverse populations

Zakkar, A., Perwaiz, N., Harikrishnan, V., Zhong, W., Narra, V., Krule, A., Yousef, F., Kim, D., Burrage-Burton, M., Lawal, A. A., Gadi, V., Korpics, M. C., Kim, S. J., Chen, Z., Khan, A. A., Molina, Y., Dai, Y., Marai, E., Meidani, H., Nguyen, R., Salahudeen, A. A.

medRxiv preprint | Aug 29 2025
PURPOSE Disparities in lung cancer incidence exist in Black populations, and screening criteria underserve Black populations due to disparately elevated risk in the screening-eligible population. Prediction models that integrate clinical and imaging-based features to individualize lung cancer risk are a potential means to mitigate these disparities. PATIENTS AND METHODS This multicenter (NLST) and catchment-population-based (UIH, urban and suburban Cook County) study utilized participants at risk of lung cancer with available lung CT imaging and follow-up between the years 2015 and 2024. 53,452 participants in NLST and 11,654 in UIH were included based on age- and tobacco-use-based risk factors for lung cancer. Cohorts were used for training and testing of deep and machine learning models using clinical features alone or combined with CT image features (hybrid computer vision). RESULTS An optimized 7-clinical-feature model achieved ROC-AUC values ranging from 0.64-0.67 in NLST and 0.60-0.65 in UIH cohorts across multiple years. Incorporation of imaging features to form a hybrid computer vision model significantly improved ROC-AUC values to 0.78-0.91 in NLST but deteriorated in UIH, with ROC-AUC values of 0.68-0.80, attributable to Black participants, for whom ROC-AUC values ranged from 0.63-0.72 across multiple years. Retraining the hybrid computer vision model by incorporating Black and other participants from the UIH cohort improved performance, with ROC-AUC values of 0.70-0.87 in a held-out UIH test set. CONCLUSION Hybrid computer vision predicted risk with improved accuracy compared to clinical risk models alone. However, potential biases in image training data reduced model generalizability in Black participants. Performance was improved upon retraining with a subset of the UIH cohort, suggesting that inclusive training and validation datasets can minimize racial disparities.
Future studies incorporating vision models trained on representative data sets may demonstrate improved health equity upon clinical use.
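The hybrid model above fuses clinical and imaging-derived risk; as a minimal sketch (not the authors' architecture), a late-fusion convex combination of two risk probabilities already illustrates why adding an imaging signal can lift ROC-AUC. The weights and scores below are illustrative assumptions.

```python
# Hedged sketch: late fusion of a clinical risk probability with an
# image-model probability, compared via a rank-based ROC-AUC. Toy data only.

def fuse(clinical_p, image_p, w_image=0.6):
    """Convex combination of clinical and imaging risk probabilities."""
    return [(1 - w_image) * c + w_image * m for c, m in zip(clinical_p, image_p)]

def roc_auc(scores, labels):
    """Rank-based AUC: probability a positive case outranks a negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels     = [1, 1, 1, 0, 0, 0]
clinical_p = [0.6, 0.4, 0.5, 0.5, 0.3, 0.45]   # weak clinical-only model
image_p    = [0.9, 0.8, 0.7, 0.2, 0.1, 0.3]    # stronger imaging signal
auc_clinical = roc_auc(clinical_p, labels)
auc_hybrid = roc_auc(fuse(clinical_p, image_p), labels)
```

The study's generalizability finding is the flip side of this: if the imaging component is biased for a subgroup, the fused score inherits that bias, which is why retraining on representative data improved performance.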

Proteogenomic Biomarker Profiling for Predicting Radiolabeled Immunotherapy Response in Resistant Prostate Cancer.

Yan B, Gao Y, Zou Y, Zhao L, Li Z

PubMed | Aug 29 2025
Treatment resistance prevents patients receiving preoperative chemoradiotherapy or targeted radiolabeled immunotherapy from achieving good outcomes and remains a major challenge in the prostate cancer (PCa) field. A novel integrative framework combining a machine learning workflow with proteogenomic profiling was used to identify predictive ultrasound biomarkers and classify patient response to radiolabeled immunotherapy in treatment-resistant, high-risk PCa patients. The deep stacked autoencoder (DSAE) model, combined with Extreme Gradient Boosting, was designed for feature refinement and classification. Multiomics data were collected from The Cancer Genome Atlas and an independent radiotherapy-treated cohort. In addition to genetic mutations (whole-exome sequencing), these data included proteomic (mass spectrometry) and transcriptomic (RNA sequencing) profiles. The DSAE architecture reduces the dimensionality of the data while maintaining biological diversity across omics layers. Resistance phenotypes show a notable relationship with proteogenomic profiles, including DNA repair pathways (Breast Cancer gene 2 [BRCA2], ataxia-telangiectasia mutated [ATM]), androgen receptor (AR) signaling regulators, and metabolic enzymes (ATP citrate lyase [ACLY], isocitrate dehydrogenase 1 [IDH1]). A specific panel of ultrasound biomarkers was validated preclinically using patient-derived xenografts. To support clinical translation, real-time phenotypic features from ultrasound imaging (e.g., perfusion, stiffness) were also considered, providing complementary insights into the tumor microenvironment and treatment responsiveness. This approach provides an integrated platform offering a clinically actionable foundation for the development of radiolabeled immunotherapy drugs before surgical intervention.

Sex-Specific Prognostic Value of Automated Epicardial Adipose Tissue Quantification on Serial Lung Cancer Screening Chest CT.

Brendel JM, Mayrhofer T, Hadzic I, Norton E, Langenbach IL, Langenbach MC, Jung M, Raghu VK, Nikolaou K, Douglas PS, Lu MT, Aerts HJWL, Foldyna B

PubMed | Aug 29 2025
Epicardial adipose tissue (EAT) is a metabolically active fat depot associated with coronary atherosclerosis and cardiovascular (CV) risk. While EAT is a known prognostic marker in lung cancer screening, its sex-specific prognostic value remains unclear. This study investigated sex differences in the prognostic utility of serial EAT measurements on low-dose chest CTs. We analyzed baseline and two-year changes in EAT volume and density using a validated automated deep-learning algorithm in 24,008 heavy-smoking participants from the National Lung Screening Trial (NLST). Sex-stratified multivariable Cox models, adjusted for CV risk factors, BMI, and coronary artery calcium (CAC), assessed associations between EAT and all-cause and CV mortality (median follow-up 12.3 years [IQR: 11.9-12.8], 4,668 [19.4%] all-cause deaths, 1,083 [4.5%] CV deaths). Women (n = 9,841; 41%) were younger, with fewer CV risk factors, lower BMI, fewer pack-years, and lower CAC than men (all P < 0.001). Baseline EAT was associated with similar all-cause and CV mortality risk in both sexes (max. aHR women: 1.70; 95%-CI: 1.13-2.55; men: 1.83; 95%-CI: 1.40-2.40, P-interaction=0.986). However, two-year EAT changes predicted CV death only in women (aHR: 1.82; 95%-CI: 1.37-2.49, P < 0.001), and showed a stronger association with all-cause mortality in women (aHR: 1.52; 95%-CI: 1.31-1.77) than in men (aHR: 1.26; 95%-CI: 1.13-1.40, P-interaction=0.041). In this large lung cancer screening cohort, serial EAT changes independently predicted CV mortality in women and were more strongly associated with all-cause mortality in women than in men. These findings support routine EAT quantification on chest CT for improved, sex-specific cardiovascular risk stratification.

Flow Matching-Based Data Synthesis for Robust Anatomical Landmark Localization.

Hadzic A, Bogensperger L, Berghold A, Urschler M

PubMed | Aug 29 2025
Anatomical landmark localization (ALL) plays a crucial role in medical imaging for applications such as therapy planning and surgical interventions. State-of-the-art deep learning methods for ALL are often trained on small datasets due to the scarcity of large, annotated medical data. This constraint often leads to overfitting on the training dataset, which in turn reduces the model's ability to generalize to unseen data. To address these challenges, we propose a multi-channel generative approach utilizing Flow Matching to synthesize diverse annotated images for data augmentation in ALL tasks. Each synthetically generated sample consists of a medical image paired with a multi-channel heatmap that encodes its landmark configuration, from which the corresponding landmark annotations can be derived. We assess the quality of synthetic image-heatmap pairs automatically using a Statistical Shape Model to evaluate landmark plausibility and compute the Fréchet Inception Distance score to quantify image quality. Our results show that pairs synthesized via Flow Matching exhibit superior quality and diversity compared with those generated by other state-of-the-art generative models like Generative Adversarial Networks or diffusion models. Furthermore, we investigate the effect of integrating synthetic data into the training process of an ALL network. In our experiments, the ALL network trained with Flow Matching-generated data demonstrates improved robustness, particularly in scenarios with limited training data or occlusions, compared with baselines that utilize solely real images or synthetic data from alternative generative models.
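The Flow Matching setup the abstract builds on can be sketched in a few lines: training samples sit on the straight path between a noise sample x0 and a data sample x1, and the network regresses the constant velocity x1 - x0. The sketch below shows only that data-construction step (no network is trained), with invented toy vectors standing in for image-plus-heatmap channels.

```python
# Hedged sketch of the conditional flow-matching training pair: a point on the
# straight interpolation path and its target velocity. Toy vectors only; a real
# implementation would feed (x_t, t) to a neural network regressing target_v.
import random

def interpolate(x0, x1, t):
    """Point on the straight path x_t = (1 - t) * x0 + t * x1."""
    return [(1 - t) * a + t * b for a, b in zip(x0, x1)]

def flow_matching_pair(x0, x1, t):
    """Training pair for flow matching: input (x_t, t) and target velocity x1 - x0."""
    x_t = interpolate(x0, x1, t)
    target_v = [b - a for a, b in zip(x0, x1)]
    return x_t, target_v

random.seed(0)
x0 = [random.gauss(0, 1) for _ in range(4)]   # noise sample
x1 = [0.5, -1.0, 2.0, 0.0]                    # "data" (e.g. image + heatmap channels)
x_t, v = flow_matching_pair(x0, x1, t=0.5)
```

At sampling time, integrating the learned velocity field from t = 0 to t = 1 carries noise to a synthetic image-heatmap pair, which is what supplies the augmented training data described above.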