Page 217 of 6546537 results

Watts, D., Mallard, T. T., Dall' Aglio, L., Giangrande, E., Kennedy, C., Cai, N., Choi, K. W., Ge, T., Smoller, J.

medrxiv logopreprintAug 29 2025
Major depressive disorder (MDD) affects millions worldwide, yet its neurobiological underpinnings remain elusive. Neuroimaging studies have yielded inconsistent results, hindered by small sample sizes and heterogeneous depression definitions. We sought to address these limitations by leveraging the UK Biobank's extensive neuroimaging data (n=30,122) to investigate how depression phenotyping depth influences neuroanatomic profiles of MDD. We examined 256 brain structural features, obtained from T1- and diffusion-weighted brain imaging, and nine depression phenotypes, ranging from self-reported symptoms (shallow definitions) to clinical diagnoses (deep). Multivariable logistic regression, machine learning classifiers, and feature transfer approaches were used to explore correlational patterns, predictive accuracy, and the transferability of important features across depression definitions. For white matter microstructure, we observed widespread fractional anisotropy decreases and mean diffusivity increases. In contrast, cortical thickness and surface area were less consistently associated across depression definitions and demonstrated weaker associations. Machine learning classifiers showed varying performance in distinguishing depression cases from controls, with shallow phenotypes achieving similar discriminative performance (AUC=0.807) and higher positive predictive value (PPV=0.655) compared to deep phenotypes (AUC=0.831, PPV=0.456) when sensitivity was standardized at 80%. However, when shallow phenotypes were downsampled to match deep phenotype case/control ratios, performance degraded substantially (AUC=0.690). Together, these results suggest that while core white-matter alterations are shared across phenotyping strategies, shallow phenotypes require approximately twice the sample size of deep phenotypes to achieve comparable classification performance, underscoring the fundamental power-specificity tradeoff in psychiatric neuroimaging research.
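The comparison above fixes sensitivity at 80% and reads off AUC and PPV at the resulting threshold. A minimal sketch of that style of evaluation, using a rank-based AUC on synthetic scores (illustrative only, not the study's pipeline):

```python
import numpy as np

def metrics_at_sensitivity(y_true, scores, target_sens=0.80):
    """AUC, plus sensitivity and PPV at the threshold reaching target_sens.

    Illustrative sketch: synthetic-score evaluation, not the study's code.
    """
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # Rank-based (Mann-Whitney) AUC: P(pos score > neg score), ties count half
    auc = (pos[:, None] > neg[None, :]).mean() \
        + 0.5 * (pos[:, None] == neg[None, :]).mean()
    # Lowest threshold whose sensitivity reaches the target
    thr = np.quantile(pos, 1.0 - target_sens)
    pred = scores >= thr
    tp = int(np.sum(pred & (y_true == 1)))
    fp = int(np.sum(pred & (y_true == 0)))
    sens = tp / len(pos)
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    return float(auc), sens, ppv
```

Standardizing sensitivity this way is what makes the shallow-vs-deep PPV figures directly comparable despite different case/control ratios.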

Denisova E, Francia P, Nardi C, Bocchi L

pubmed logopapersAug 28 2025
A recent investigation into deep learning-based denoising for early Monte Carlo (MC) path tracing in computed tomography (CT) volume visualization yielded promising quantitative outcomes but inconsistent qualitative assessments. This research probes the underlying causes of this incongruity by deploying a web-based SurveyMonkey questionnaire distributed among healthcare professionals. The survey targeted radiologists, residents, orthopedic surgeons, and veterinarians, leveraging the authors' professional networks for dissemination. To evaluate perceptions, the questionnaire featured randomized sections gauging attitudes towards AI-enhanced image and video quality, confidence in reference images, and clinical applicability. Seventy-four participants took part, encompassing a spectrum of experience levels: <1 year (n=11), 1-3 years (n=27), 3-5 years (n=12), and >5 years (n=24). A substantial majority (77%) expressed a preference for AI-enhanced images over traditional MC estimates, a preference influenced by participant experience (adjusted OR 0.81, 95% CI 0.67-0.98, p=0.033). Experience correlated with confidence in AI-generated images (adjusted OR 0.98, 95% CI 0.95-1.00, p=0.018-0.047) and satisfaction with video previews, both with and without AI (adjusted OR 0.96-0.98, 95% CI 0.92-1.00, p=0.033-0.048). Significant monotonic relationships emerged between experience, confidence (ρ=0.25-0.26, p=0.025-0.029), and satisfaction (ρ=0.23-0.24, p=0.037-0.046). The findings underscore the potential of AI post-processing to improve the rendering of biomedical volumes, noting enhanced confidence and satisfaction among experienced participants. The study reveals that participants' preferences may not align perfectly with quality metrics such as peak signal-to-noise ratio and structural similarity index, highlighting nuances in evaluating AI's qualitative impact on CT image denoising.
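The adjusted odds ratios above come from logistic-regression coefficients: an OR and its Wald 95% CI are recovered from a coefficient β and its standard error as exp(β) and exp(β ± 1.96·SE). A small generic helper (the numbers below are illustrative, not the survey's):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and Wald confidence interval from a logistic-regression
    coefficient `beta` and its standard error `se`."""
    point = math.exp(beta)
    lo = math.exp(beta - z * se)
    hi = math.exp(beta + z * se)
    return point, (lo, hi)
```

A CI that excludes 1.0 (as with the 0.67-0.98 interval reported above) is what makes an adjusted OR statistically significant at the 5% level.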

Gartland CN, Healy J, Lynham RS, Nowlan NC, Green C, Redmond SJ

pubmed logopapersAug 28 2025
Developmental dysplasia of the hip (DDH), a developmental deformity with an incidence of 0.1-3.4%, lacks an objective and reliable definition and assessment metric by which to conduct timely diagnosis. This work aims to address this challenge by developing a system of analysis to accurately detect 22 key anatomical landmarks in anteroposterior pelvic radiographs of the juvenile hip, from which a range of novel salient morphological measures can be derived. A coarse-to-fine approach was implemented, with six model variations of the U-Net deep neural network architecture compared for the coarse model and four variations for the fine model; model variations included differences in data augmentation applied, image input size, network attention gates, and loss function design. The best performing combination achieved a root-mean-square error in the positional accuracy of landmark detection of 3.79 mm with a bias and precision in the x-direction of 0.03 ± 17.6 mm and y-direction of 1.76 ± 22.5 mm in the image frame of reference. Average errors for each morphological metric are in line with the performance of clinical experts. Future work will use this system to perform a population analysis to accurately characterize hip joint morphology and develop an objective and reliable assessment metric for DDH.
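The positional-accuracy figures (RMSE, plus per-axis bias ± precision) can be computed from predicted and ground-truth landmark coordinates. A sketch under the common convention that bias is the mean signed error and precision is 1.96× its standard deviation (the paper's exact definition may differ):

```python
import numpy as np

def landmark_errors(pred, true):
    """RMSE over 2-D landmarks, plus per-axis bias and 1.96*SD precision.

    pred, true: arrays of shape (N, 2), coordinates in millimetres.
    Illustrative convention only; not the authors' evaluation code.
    """
    err = np.asarray(pred, float) - np.asarray(true, float)
    # Root-mean-square of the Euclidean point-to-point distances
    rmse = float(np.sqrt((err ** 2).sum(axis=1).mean()))
    bias = err.mean(axis=0)                    # mean signed error per axis
    precision = 1.96 * err.std(axis=0, ddof=1)  # ~95% limits of agreement
    return rmse, bias, precision
```

Note how a near-zero bias can coexist with a wide precision, as in the reported 0.03 ± 17.6 mm: errors are centered but scattered.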

Sowa H, Yagi H, Ueda K, Hashimoto M, Karasaki K, Liu Q, Kurozumi A, Adachi Y, Yanase T, Okamura S, Zhai B, Takeda N, Ando M, Yamauchi H, Ito N, Ono M, Akazawa H, Komuro I

pubmed logopapersAug 28 2025
Inflammation plays an important role in the pathogenesis of vascular diseases. Here we show the involvement of perivascular inflammation in aortic dilatation in Marfan syndrome (MFS). In the aorta of MFS patients and Fbn1C1041G/+ mice, macrophages markedly accumulated in periaortic tissues with increased inflammatory cytokine expression. Metabolic inflammatory stress induced by a high-fat diet (HFD) enhanced vascular inflammation predominantly in periaortic tissues and accelerated aortic dilatation in Fbn1C1041G/+ mice, both of which were inhibited by low-dose pitavastatin. HFD feeding also intensified structural disorganization of the tunica media in Fbn1C1041G/+ mice, including elastic fiber fragmentation, fibrosis, and proteoglycan accumulation, along with increased activation of TGF-β downstream targets. Pitavastatin treatment mitigated these alterations. For non-invasive assessment of perivascular adipose tissue (PVAT) inflammation in a clinical setting, we developed an automated machine learning-based analysis program for CT images to calculate the perivascular fat attenuation index of the ascending aorta (AA-FAI), which correlates with periaortic fat inflammation. The AA-FAI was significantly higher in patients with MFS than in patients without hereditary connective tissue disorders. These results suggest that perivascular inflammation contributes to aneurysm formation in MFS and might be a potential target for preventing and treating vascular events in MFS.
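Fat attenuation indices of this kind are conventionally the mean CT attenuation of perivascular voxels falling inside the adipose-tissue Hounsfield window (commonly about -190 to -30 HU). The authors' AA-FAI pipeline is not described here, so the following is only a generic sketch of that standard definition, with assumed window bounds:

```python
import numpy as np

def fat_attenuation_index(hu, lo=-190, hi=-30):
    """Mean attenuation (HU) of adipose-window voxels around a vessel.

    hu: 1-D array of Hounsfield-unit values sampled from the perivascular
    region. Window bounds follow the widely used -190..-30 HU convention;
    the study's actual implementation may differ.
    """
    hu = np.asarray(hu, float)
    fat = hu[(hu >= lo) & (hu <= hi)]
    return float(fat.mean()) if fat.size else float("nan")
```

Higher (less negative) values indicate denser, presumably more inflamed perivascular fat, which is the direction of the MFS-vs-control difference reported above.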

Chen J, Mirvis M, Ekman A, Vanslembrouck B, Gros ML, Larabell C, Marshall WF

pubmed logopapersAug 28 2025
Soft X-ray tomography (SXT) is an invaluable tool for quantitatively analyzing cellular structures at sub-optical isotropic resolution. However, it has traditionally depended on manual segmentation, limiting its scalability for large datasets. Here, we leverage a deep learning-based auto-segmentation pipeline to segment and label cellular structures in hundreds of cells across three <i>Saccharomyces cerevisiae</i> strains. This task-based pipeline employs manual iterative refinement to improve segmentation accuracy for key structures, including the cell body, nucleus, vacuole, and lipid droplets, enabling high-throughput and precise phenotypic analysis. Using this approach, we quantitatively compared the 3D whole-cell morphometric characteristics of wild-type, VPH1-GFP, and <i>vac14</i> strains, uncovering detailed strain-specific cell and organelle size and shape variations. We show the utility of SXT data for precise 3D curvature analysis of entire organelles and cells and detection of fine morphological features using surface meshes. Our approach facilitates comparative analyses with high spatial precision and statistical throughput, uncovering subtle morphological features at the single-cell and population level. This workflow significantly enhances our ability to characterize cell anatomy and supports scalable studies on the mesoscale, with applications in investigating cellular architecture, organelle biology, and genetic research across diverse biological contexts.

Yao J, Ahmad W, Cheng S, Costa AF, Ertl-Wagner BB, Nicolaou S, Souza C, Patlas MN

pubmed logopapersAug 28 2025
Radiology in Canada is evolving through a combination of clinical innovation, collaborative research and the adoption of advanced imaging technologies. This overview highlights contributions from selected academic centres across the country that are shaping diagnostic and interventional practice. At Dalhousie University, researchers have led efforts to improve contrast media safety, refine imaging techniques for hepatopancreatobiliary diseases, and develop peer learning programs that support continuous quality improvement. The University of Ottawa has made advances in radiomics, magnetic resonance imaging protocols, and virtual reality applications for surgical planning, while contributing to global research networks focused on evaluating LI-RADS performance. At the University of British Columbia, the implementation of photon-counting CT, dual-energy CT, and artificial intelligence tools is enhancing diagnostic precision in oncology, trauma, and stroke imaging. The Hospital for Sick Children is a leader in paediatric radiology, with work ranging from artificial intelligence (AI) brain tumour classification to innovations in foetal MRI and congenital heart disease imaging. Together, these initiatives reflect the strength and diversity of Canadian radiology, demonstrating a shared commitment to advancing patient care through innovation, data-driven practice and collaboration.

Jia L, Li Z, Huang G, Jiang H, Xu H, Zhao J, Li J, Lei J

pubmed logopapersAug 28 2025
To develop a CT-based deep learning model for predicting the macrotrabecular-massive (MTM) subtype of hepatocellular carcinoma (HCC) and to compare its diagnostic performance with machine learning models. We retrospectively collected contrast-enhanced CT data from patients diagnosed with HCC via histopathological examination between January 2019 and August 2023. These patients were recruited from two medical centers. All analyses were performed using two-dimensional regions of interest. We developed a novel deep learning network based on ResNet-50, named ResNet-ViT Contrastive Learning (RVCL). The RVCL model was compared against baseline deep learning models and machine learning models. Additionally, we developed a multimodal prediction model by integrating deep learning models with clinical parameters. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). A total of 368 patients (mean age, 56 ± 10 years; 285 [77%] male) from two institutions were retrospectively enrolled. Our RVCL model demonstrated superior diagnostic performance in predicting MTM (AUC = 0.93) on the external test set compared to the five baseline deep learning models (AUCs 0.46-0.72, all p < 0.05) and the three machine learning models (AUCs 0.49-0.60, all p < 0.05). However, integrating the clinical biomarker alpha-fetoprotein (AFP) into the RVCL model did not significantly improve diagnostic performance (internal test set: AUC 0.99 vs 0.95 [p = 0.08]; external test set: AUC 0.98 vs 0.93 [p = 0.05]). The deep learning model based on contrast-enhanced CT can accurately predict the MTM subtype in HCC patients, offering a practical tool for clinical decision-making. The RVCL model introduces a transformative approach to non-invasive diagnosis of the MTM subtype of HCC by harmonizing convolutional neural networks and vision transformers within a unified architecture. Deep learning outperformed machine learning for predicting the MTM subtype; RVCL improves accuracy and can guide personalized therapy.

Benzakoun J, Scheldeman L, Wouters A, Cheng B, Ebinger M, Endres M, Fiebach JB, Fiehler J, Galinovic I, Muir KW, Nighoghossian N, Pedraza S, Puig J, Simonsen CZ, Thijs V, Thomalla G, Micard E, Chen B, Lapergue B, Boulouis G, Le Berre A, Baron JC, Turc G, Ben Hassen W, Naggara O, Oppenheim C, Lemmens R

pubmed logopapersAug 28 2025
In Acute Ischemic Stroke (AIS), mismatch between Diffusion-Weighted Imaging (DWI) and Fluid-Attenuated Inversion-Recovery (FLAIR) helps identify patients who can benefit from thrombolysis when stroke onset time is unknown (15% of AIS). However, visual assessment has suboptimal observer agreement. Our study aims to develop and validate a Deep-Learning model for predicting DWI-FLAIR mismatch using solely DWI data. This retrospective study included AIS patients from ETIS registry (derivation cohort, 2018-2024) and WAKE-UP trial (validation cohort, 2012-2017). DWI-FLAIR mismatch was rated visually. We trained a model to predict manually-labeled FLAIR visible areas (FVA) matching the DWI lesion on baseline and early follow-up MRIs, using only DWI as input. FVA-index was defined as the volume of predicted regions. Area under the ROC curve (AUC) and optimal FVA-index cutoff to predict DWI-FLAIR mismatch in the derivation cohort were computed. Validation was performed using baseline MRIs of the validation cohort. The derivation cohort included 3605 MRIs in 2922 patients and the validation cohort 844 MRIs in 844 patients. FVA-index demonstrated strong predictive value for DWI-FLAIR mismatch in baseline MRIs from the derivation (<i>n</i> = 2453, AUC = 0.85, 95%CI: 0.84-0.87) and validation cohort (<i>n</i> = 844, AUC = 0.86, 95%CI: 0.84-0.89). With an optimal FVA-index cutoff at 0.5, we obtained a kappa of 0.54 (95%CI: 0.48-0.59), 70% sensitivity (378/537, 95%CI: 66-74%) and 88% specificity (269/307, 95%CI: 83-91%) in the validation cohort. The model accurately predicts DWI-FLAIR mismatch in AIS patients with unknown stroke onset. It could aid readers when visual rating is challenging, or FLAIR unavailable.
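The validation-cohort agreement statistics above all follow from the 2×2 table of predicted vs visually rated mismatch: the reported fractions (378/537 for sensitivity, 269/307 for specificity) imply counts tp=378, fn=159, tn=269, fp=38, from which Cohen's kappa can be recomputed. A generic helper (binary Cohen's kappa, not the authors' code):

```python
def binary_confusion_stats(tp, fn, tn, fp):
    """Sensitivity, specificity, and Cohen's kappa from 2x2 counts."""
    n = tp + fn + tn + fp
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    observed = (tp + tn) / n  # raw agreement
    # Chance agreement from the marginal totals of the two raters
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (observed - expected) / (1 - expected)
    return sens, spec, kappa
```

Plugging in the implied counts reproduces the reported kappa of approximately 0.54, a useful sanity check that the published sensitivity, specificity, and kappa are mutually consistent.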

Yamagishi Y, Nakamura Y, Kikuchi T, Sonoda Y, Hirakawa H, Kano S, Nakamura S, Hanaoka S, Yoshikawa T, Abe O

pubmed logopapersAug 28 2025
Recent advances in large language models have highlighted the need for high-quality multilingual medical datasets. Although Japan is a global leader in computed tomography (CT) scanner deployment and use, the absence of large-scale Japanese radiology datasets has hindered the development of specialized language models for medical imaging analysis. Despite the emergence of multilingual models and language-specific adaptations, the development of Japanese-specific medical language models has been constrained by a lack of comprehensive datasets, particularly in radiology. This study aims to address this critical gap in Japanese medical natural language processing resources, for which a comprehensive Japanese CT report dataset was developed through machine translation, to establish a specialized language model for structured classification. In addition, a rigorously validated evaluation dataset was created through expert radiologist refinement to ensure a reliable assessment of model performance. We translated the CT-RATE dataset (24,283 CT reports from 21,304 patients) into Japanese using GPT-4o mini. The training dataset consisted of 22,778 machine-translated reports, and the validation dataset included 150 reports carefully revised by radiologists. We developed CT-BERT-JPN, a specialized Bidirectional Encoder Representations from Transformers (BERT) model for Japanese radiology text, based on the "tohoku-nlp/bert-base-japanese-v3" architecture, to extract 18 structured findings from reports. Translation quality was assessed with Bilingual Evaluation Understudy (BLEU) and Recall-Oriented Understudy for Gisting Evaluation (ROUGE) scores and further evaluated by radiologists in a dedicated human-in-the-loop experiment. 
In that experiment, each of a randomly selected subset of reports was independently reviewed by 2 radiologists-1 senior (postgraduate year [PGY] 6-11) and 1 junior (PGY 4-5)-using a 5-point Likert scale to rate: (1) grammatical correctness, (2) medical terminology accuracy, and (3) overall readability. Inter-rater reliability was measured via quadratic weighted kappa (QWK). Model performance was benchmarked against GPT-4o using accuracy, precision, recall, F1-score, ROC (receiver operating characteristic)-AUC (area under the curve), and average precision. General text structure was preserved (BLEU: 0.731 findings, 0.690 impression; ROUGE: 0.770-0.876 findings, 0.748-0.857 impression), though expert review identified 3 categories of necessary refinements-contextual adjustment of technical terms, completion of incomplete translations, and localization of Japanese medical terminology. The radiologist-revised translations scored significantly higher than raw machine translations across all dimensions, and all improvements were statistically significant (P<.001). CT-BERT-JPN outperformed GPT-4o on 11 of 18 findings (61%), achieving perfect F1-scores for 4 conditions and F1-score >0.95 for 14 conditions, despite varied sample sizes (7-82 cases). Our study established a robust Japanese CT report dataset and demonstrated the effectiveness of a specialized language model in structured classification of findings. This hybrid approach of machine translation and expert validation enabled the creation of large-scale datasets while maintaining high-quality standards. This study provides essential resources for advancing medical artificial intelligence research in Japanese health care settings, using datasets and models publicly available for research to facilitate further advancement in the field.
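Inter-rater reliability here is quadratic weighted kappa (QWK) over paired 5-point Likert ratings, which penalizes disagreements by the square of their distance on the scale. A compact reference implementation (illustrative, not the study's code):

```python
import numpy as np

def quadratic_weighted_kappa(r1, r2, k=5):
    """QWK between two raters' ratings on an ordinal 1..k scale."""
    obs = np.zeros((k, k))
    for a, b in zip(r1, r2):
        obs[a - 1, b - 1] += 1
    # Quadratic disagreement weights: 0 on the diagonal, 1 at the extremes
    idx = np.arange(k)
    w = (idx[:, None] - idx[None, :]) ** 2 / (k - 1) ** 2
    # Expected table under independence, from the two raters' marginals
    expected = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / obs.sum()
    return 1.0 - (w * obs).sum() / (w * expected).sum()
```

QWK reaches 1 for perfect agreement and can go negative when raters systematically disagree, which makes it stricter and more informative than raw percent agreement for ordinal scales.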

Du T, Li C, Grzegozek M, Huang X, Rahaman M, Wang X, Sun H

pubmed logopapersAug 28 2025
Purpose: The prediction of immunotherapy efficacy in cervical cancer patients remains a critical clinical challenge. This study aims to develop and validate a deep learning-based automatic tumor segmentation method on PET/CT images, extract texture features from the tumor regions of cervical cancer patients, investigate their correlation with PD-L1 expression, and construct a predictive model for immunotherapy efficacy. Methods: We retrospectively collected data from 283 pathologically confirmed cervical cancer patients who underwent <sup>18</sup>F-FDG PET/CT examinations, divided into three subsets. Subset-I (n = 97) was used to develop a deep learning-based segmentation model using Attention-UNet and region-growing methods on co-registered PET/CT images. Subset-II (n = 101) was used to explore correlations between radiomic features and PD-L1 expression. Subset-III (n = 85) was used to construct and validate a radiomic model for predicting immunotherapy response. Results: The segmentation model achieved optimal performance at the 94th epoch with an IoU of 0.746 in the validation set, and manual evaluation confirmed accurate tumor localization. Sixteen features demonstrated excellent reproducibility (ICC > 0.75). In Subset-II, 183 features showed significant correlations with PD-L1 expression (P < 0.05). Using these features in Subset-III, the SVM-based radiomic model achieved the best predictive performance with an AUC of 0.935. Conclusion: Across the three subsets, we showed that deep learning models incorporating medical prior knowledge can accurately and automatically segment cervical cancer lesions, that texture features extracted from <sup>18</sup>F-FDG PET/CT are significantly associated with PD-L1 expression, and that predictive models based on these features can effectively predict the efficacy of PD-L1 immunotherapy. This approach offers a non-invasive, efficient, and cost-effective tool for guiding individualized immunotherapy in cervical cancer patients and may help reduce patient burden and accelerate treatment planning.
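The segmentation quality reported for Subset-I is intersection-over-union (IoU, 0.746 at the best epoch); IoU and the closely related Dice coefficient are computed from binary masks as follows (generic sketch, not the study's pipeline):

```python
import numpy as np

def iou_and_dice(pred, gt):
    """IoU and Dice coefficient for two binary masks of equal shape."""
    pred = np.asarray(pred, bool)
    gt = np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    # Empty-vs-empty masks conventionally score perfect agreement
    iou = float(inter / union) if union else 1.0
    dice = float(2 * inter / total) if total else 1.0
    return iou, dice
```

Dice is always at least as large as IoU for the same pair of masks (Dice = 2·IoU / (1 + IoU)), which is worth remembering when comparing scores across papers that report different overlap metrics.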
