
Hybrid quantum-classical-quantum convolutional neural networks.

Long C, Huang M, Ye X, Futamura Y, Sakurai T

pubmed · Aug 28 2025
Deep learning has achieved significant success in pattern recognition, with convolutional neural networks (CNNs) serving as a foundational architecture for extracting spatial features from images. Quantum computing provides an alternative computational framework; hybrid quantum-classical convolutional neural networks (QCCNNs) leverage high-dimensional Hilbert spaces and entanglement to surpass classical CNNs in image classification accuracy under comparable architectures. Despite these performance improvements, QCCNNs typically use fixed quantum layers without trainable quantum parameters, which limits their ability to capture non-linear quantum representations and cuts the model off from the potential advantages of expressive quantum learning. In this work, we present a hybrid quantum-classical-quantum convolutional neural network (QCQ-CNN) that incorporates a quantum convolutional filter, a shallow classical CNN, and a trainable variational quantum classifier. This architecture aims to enhance the expressivity of decision boundaries in image classification tasks by introducing tunable quantum parameters into the end-to-end learning process. Through a series of small-sample experiments on MNIST, F-MNIST, and MRI tumor datasets, QCQ-CNN demonstrates competitive accuracy and convergence behavior compared to classical and hybrid baselines. We further analyze the effect of ansatz depth and find that moderate-depth quantum circuits can improve learning stability without introducing excessive complexity. Additionally, simulations incorporating depolarizing noise and finite sampling shots suggest that QCQ-CNN maintains a certain degree of robustness under realistic quantum noise conditions. While our results are currently limited to simulations with small-scale quantum circuits, the proposed approach offers a promising direction for hybrid quantum learning in near-term applications.
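To make the quantum-classical-quantum pattern concrete, the sketch below wires a fixed quantum filter into a trainable variational classifier with PennyLane. The qubit count, gate choices, and ansatz depth are illustrative assumptions, not the architecture from the paper.

```python
# Minimal sketch of the QCQ idea: a fixed quantum "filter" encodes a patch,
# and a trainable variational circuit classifies the resulting features.
# All circuit details here are assumptions for demonstration.
import numpy as np
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_filter(patch):
    # Fixed (non-trainable) encoding of a 4-pixel patch via angle embedding.
    qml.AngleEmbedding(patch, wires=range(n_qubits))
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

@qml.qnode(dev)
def variational_classifier(features, weights):
    # Trainable ansatz: depth trades expressivity against complexity,
    # mirroring the paper's ansatz-depth analysis.
    qml.AngleEmbedding(features, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

depth = 2  # "moderate depth", per the abstract's stability observation
shape = qml.StronglyEntanglingLayers.shape(n_layers=depth, n_wires=n_qubits)
weights = np.random.uniform(0, np.pi, size=shape)

patch = np.array([0.1, 0.5, 0.9, 0.3])        # toy 2x2 image patch
feats = np.array(quantum_filter(patch))        # quantum filter features
print(variational_classifier(feats, weights))  # score in [-1, 1]
```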

AI-driven body composition monitoring and its prognostic role in mCRPC undergoing lutetium-177 PSMA radioligand therapy: insights from a retrospective single-center analysis.

Ruhwedel T, Rogasch J, Galler M, Schatka I, Wetz C, Furth C, Biernath N, De Santis M, Shnayien S, Kolck J, Geisel D, Amthauer H, Beetz NL

pubmed · Aug 28 2025
Body composition (BC) analysis quantifies the relative amounts of different body tissues as a measure of physical fitness and tumor cachexia. We hypothesized that relative changes in BC parameters, assessed by artificial intelligence-based, PACS-integrated software, between baseline imaging before the start of radioligand therapy (RLT) and interim staging after two RLT cycles could predict overall survival (OS) in patients with metastatic castration-resistant prostate cancer (mCRPC). We conducted a single-center, retrospective analysis of 92 patients with mCRPC undergoing [<sup>177</sup>Lu]Lu-PSMA RLT between September 2015 and December 2023. All patients had [<sup>68</sup>Ga]Ga-PSMA-11 PET/CT at baseline (≤ 6 weeks before the first RLT cycle) and at interim staging (6-8 weeks after the second RLT cycle), allowing for longitudinal BC assessment. During follow-up, 78 patients (85%) died. Median OS was 16.3 months, and median follow-up time in survivors was 25.6 months. The 1-year mortality rate was 32.6% (95% CI 23.0-42.2%) and the 5-year mortality rate was 92.9% (95% CI 85.8-100.0%). In multivariable regression, relative change in visceral adipose tissue (VAT) (HR: 0.26; p = 0.006), previous chemotherapy of any type (HR: 2.4; p = 0.003), the presence of liver metastases (HR: 2.4; p = 0.018), and a higher baseline De Ritis ratio (HR: 1.4; p < 0.001) remained independent predictors of OS. Patients with a greater decrease in VAT (< -20%) had a median OS of 10.2 months versus 18.5 months in patients with a smaller VAT decrease or a VAT increase (≥ -20%) (log-rank test: p = 0.008). In a separate Cox model, the change in VAT predicted OS (p = 0.005) independent of the best PSA response after 1-2 RLT cycles (p = 0.09), and there was no interaction between the two (p = 0.09). PACS-integrated, AI-based BC monitoring detects relative changes in VAT, which was an independent predictor of shorter OS in our population of patients undergoing RLT.
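The core analysis step, relating a dichotomized relative VAT change to OS alongside clinical covariates, can be sketched with a Cox model in lifelines; the column names and synthetic data below are hypothetical stand-ins for the study's variables.

```python
# Toy sketch: dichotomize relative VAT change at -20% and relate it to
# overall survival via Cox regression. Data are synthetic; lifelines is
# assumed as the survival-analysis library.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 92
df = pd.DataFrame({
    "vat_change_pct": rng.normal(-5, 20, n),  # (interim - baseline) / baseline * 100
    "prior_chemo": rng.integers(0, 2, n),
    "liver_mets": rng.integers(0, 2, n),
    "de_ritis": rng.normal(1.2, 0.4, n),
    "os_months": rng.exponential(16.3, n),
    "death": rng.integers(0, 2, n),
})
df["vat_drop_gt20"] = (df["vat_change_pct"] < -20).astype(int)

cph = CoxPHFitter()
cph.fit(df[["vat_drop_gt20", "prior_chemo", "liver_mets", "de_ritis",
            "os_months", "death"]],
        duration_col="os_months", event_col="death")
cph.print_summary()  # hazard ratios and p-values per covariate
```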

Automated system of analysis to quantify pediatric hip morphology.

Gartland CN, Healy J, Lynham RS, Nowlan NC, Green C, Redmond SJ

pubmed · Aug 28 2025
Developmental dysplasia of the hip (DDH), a developmental deformity with an incidence of 0.1-3.4%, lacks an objective and reliable definition and assessment metric for timely diagnosis. This work addresses that challenge with a system of analysis that detects 22 key anatomical landmarks in anteroposterior pelvic radiographs of the juvenile hip, from which a range of novel salient morphological measures can be derived. A coarse-to-fine approach was implemented, comparing six variations of the U-Net deep neural network architecture for the coarse model and four for the fine model; variations differed in the data augmentation applied, image input size, network attention gates, and loss function design. The best-performing combination achieved a root-mean-square landmark localization error of 3.79 mm, with a bias and precision of 0.03 ± 17.6 mm in the x-direction and 1.76 ± 22.5 mm in the y-direction in the image frame of reference. Average errors for each morphological metric are in line with the performance of clinical experts. Future work will use this system in a population analysis to characterize hip joint morphology and develop an objective and reliable assessment metric for DDH.
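The reported error metrics (radial RMSE plus per-axis bias and precision) can be computed from predicted and ground-truth landmark coordinates as in the sketch below; the array names and shapes are illustrative assumptions, not the paper's code.

```python
# Landmark-error metrics on toy data: predicted and ground-truth landmarks
# as (n_images, n_landmarks, 2) arrays in millimetres.
import numpy as np

def landmark_errors(pred: np.ndarray, true: np.ndarray):
    diff = pred - true                                 # signed per-axis errors
    rmse = np.sqrt(np.mean(np.sum(diff**2, axis=-1)))  # radial RMSE
    bias = diff.reshape(-1, 2).mean(axis=0)            # mean signed error (x, y)
    precision = diff.reshape(-1, 2).std(axis=0)        # spread of signed error (x, y)
    return rmse, bias, precision

pred = np.random.normal(size=(10, 22, 2))
true = np.random.normal(size=(10, 22, 2))
rmse, bias, prec = landmark_errors(pred, true)
print(f"RMSE {rmse:.2f} mm, bias x/y {bias}, precision x/y {prec}")
```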

Mask-Guided Multi-Channel SwinUNETR Framework for Robust MRI Classification

Smriti Joshi, Lidia Garrucho, Richard Osuala, Oliver Diaz, Karim Lekadir

arxiv preprint · Aug 28 2025
Breast cancer is one of the leading causes of cancer-related mortality in women, and early detection is essential for improving outcomes. Magnetic resonance imaging (MRI) is a highly sensitive tool for breast cancer detection, particularly in women at high risk or with dense breast tissue, where mammography is less effective. The ODELIA consortium organized a multi-center challenge to foster AI-based solutions for breast cancer diagnosis and classification. The dataset included 511 studies from six European centers, acquired on scanners from multiple vendors at both 1.5 T and 3 T. Each study was labeled for the left and right breast as no lesion, benign lesion, or malignant lesion. We developed a SwinUNETR-based deep learning framework that incorporates breast region masking, extensive data augmentation, and ensemble learning to improve robustness and generalizability. Our method achieved second place on the challenge leaderboard, highlighting its potential to support clinical breast MRI interpretation. We publicly share our codebase at https://github.com/smriti-joshi/bcnaim-odelia-challenge.git.
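A minimal sketch of the mask-guided multi-channel idea follows: the MRI volume and a breast-region mask are stacked as input channels, and ensemble members' logits are averaged. A tiny 3D CNN stands in for the SwinUNETR backbone, and all shapes and the masking scheme are assumptions.

```python
# Mask-guided multi-channel input plus logit-averaging ensemble, sketched
# with a placeholder 3D CNN instead of the actual SwinUNETR framework.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, in_channels=2, n_classes=3):  # no lesion / benign / malignant
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, n_classes),
        )
    def forward(self, x):
        return self.net(x)

mri = torch.randn(1, 1, 32, 64, 64)                   # (B, C, D, H, W) volume
mask = (torch.rand(1, 1, 32, 64, 64) > 0.5).float()   # breast-region mask
x = torch.cat([mri * mask, mask], dim=1)              # mask-guided multi-channel input

models = [TinyClassifier() for _ in range(3)]         # ensemble members
with torch.no_grad():
    logits = torch.stack([m(x) for m in models]).mean(dim=0)  # averaged logits
print(logits.softmax(-1))  # per-class probabilities for one breast
```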

Dual-model approach for accurate chest disease detection using GViT and swin transformer V2.

Ahmad K, Rehman HU, Shah B, Ali F, Hussain I

pubmed · Aug 28 2025
The precise detection and localization of abnormalities in radiological images are crucial for clinical diagnosis and treatment planning. Building reliable models requires large, annotated datasets containing disease labels and abnormality locations. Radiologists often face challenges in identifying and segmenting thoracic diseases such as COVID-19, pneumonia, tuberculosis, and lung cancer because of overlapping visual patterns in X-ray images. This study proposes a dual-model approach: Gated Vision Transformers (GViT) for classification and Swin Transformer V2 for segmentation and localization. GViT successfully identifies thoracic diseases that exhibit similar radiographic features, while Swin Transformer V2 maps lung areas and pinpoints affected regions. Classification metrics, including precision, recall, and F1-scores, surpassed 0.95, while the Intersection over Union (IoU) score reached 90.98%. Performance assessment via Dice coefficient, boundary F1-score, and Hausdorff distance demonstrated the system's effectiveness. This artificial intelligence solution can help radiologists reduce their mental workload while improving diagnostic precision in healthcare systems that face resource constraints. The results indicate that transformer-based architectures hold strong promise for enhancing medical imaging procedures. Future AI tools should build on this foundation, focusing on comprehensive and precise detection of chest diseases to support effective clinical decision-making.
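The segmentation metrics quoted above, IoU and the Dice coefficient, reduce to simple set-overlap computations on binary masks, as in this sketch on toy arrays (not the paper's evaluation code):

```python
# IoU and Dice for binary segmentation masks.
import numpy as np

def iou_and_dice(pred: np.ndarray, true: np.ndarray, eps: float = 1e-8):
    pred, true = pred.astype(bool), true.astype(bool)
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    iou = inter / (union + eps)                        # intersection over union
    dice = 2 * inter / (pred.sum() + true.sum() + eps) # Dice coefficient
    return iou, dice

pred = np.zeros((64, 64), dtype=int); pred[10:40, 10:40] = 1
true = np.zeros((64, 64), dtype=int); true[15:45, 15:45] = 1
print(iou_and_dice(pred, true))
```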

Canadian radiology: 2025 update.

Yao J, Ahmad W, Cheng S, Costa AF, Ertl-Wagner BB, Nicolaou S, Souza C, Patlas MN

pubmed · Aug 28 2025
Radiology in Canada is evolving through a combination of clinical innovation, collaborative research and the adoption of advanced imaging technologies. This overview highlights contributions from selected academic centres across the country that are shaping diagnostic and interventional practice. At Dalhousie University, researchers have led efforts to improve contrast media safety, refine imaging techniques for hepatopancreatobiliary diseases, and develop peer learning programs that support continuous quality improvement. The University of Ottawa has made advances in radiomics, magnetic resonance imaging protocols, and virtual reality applications for surgical planning, while contributing to global research networks focused on evaluating LI-RADS performance. At the University of British Columbia, the implementation of photon-counting CT, dual-energy CT, and artificial intelligence tools is enhancing diagnostic precision in oncology, trauma, and stroke imaging. The Hospital for Sick Children is a leader in paediatric radiology, with work ranging from artificial intelligence (AI) brain tumour classification to innovations in foetal MRI and congenital heart disease imaging. Together, these initiatives reflect the strength and diversity of Canadian radiology, demonstrating a shared commitment to advancing patient care through innovation, data-driven practice and collaboration.

Development of a Large-Scale Dataset of Chest Computed Tomography Reports in Japanese and a High-Performance Finding Classification Model: Dataset Development and Validation Study.

Yamagishi Y, Nakamura Y, Kikuchi T, Sonoda Y, Hirakawa H, Kano S, Nakamura S, Hanaoka S, Yoshikawa T, Abe O

pubmed · Aug 28 2025
Recent advances in large language models have highlighted the need for high-quality multilingual medical datasets. Although Japan is a global leader in computed tomography (CT) scanner deployment and use, the absence of large-scale Japanese radiology datasets has hindered the development of specialized language models for medical imaging analysis. Despite the emergence of multilingual models and language-specific adaptations, the development of Japanese-specific medical language models has been constrained by a lack of comprehensive datasets, particularly in radiology. This study aims to address this critical gap in Japanese medical natural language processing resources: a comprehensive Japanese CT report dataset was developed through machine translation to establish a specialized language model for structured classification, and a rigorously validated evaluation dataset was created through expert radiologist refinement to ensure reliable assessment of model performance. We translated the CT-RATE dataset (24,283 CT reports from 21,304 patients) into Japanese using GPT-4o mini. The training dataset consisted of 22,778 machine-translated reports, and the validation dataset included 150 reports carefully revised by radiologists. We developed CT-BERT-JPN, a specialized Bidirectional Encoder Representations from Transformers (BERT) model for Japanese radiology text, based on the "tohoku-nlp/bert-base-japanese-v3" architecture, to extract 18 structured findings from reports. Translation quality was assessed with Bilingual Evaluation Understudy (BLEU) and Recall-Oriented Understudy for Gisting Evaluation (ROUGE) scores and further evaluated by radiologists in a dedicated human-in-the-loop experiment. In that experiment, each report in a randomly selected subset was independently reviewed by 2 radiologists (1 senior, postgraduate year [PGY] 6-11, and 1 junior, PGY 4-5) using a 5-point Likert scale to rate (1) grammatical correctness, (2) medical terminology accuracy, and (3) overall readability. Inter-rater reliability was measured via quadratic weighted kappa (QWK). Model performance was benchmarked against GPT-4o using accuracy, precision, recall, F1-score, receiver operating characteristic (ROC) area under the curve (AUC), and average precision. General text structure was preserved (BLEU: 0.731 findings, 0.690 impression; ROUGE: 0.770-0.876 findings, 0.748-0.857 impression), though expert review identified 3 categories of necessary refinements: contextual adjustment of technical terms, completion of incomplete translations, and localization of Japanese medical terminology. The radiologist-revised translations scored significantly higher than raw machine translations across all dimensions, and all improvements were statistically significant (P<.001). CT-BERT-JPN outperformed GPT-4o on 11 of 18 findings (61%), achieving perfect F1-scores for 4 conditions and F1-scores >0.95 for 14 conditions, despite varied sample sizes (7-82 cases). Our study established a robust Japanese CT report dataset and demonstrated the effectiveness of a specialized language model for structured classification of findings. This hybrid approach of machine translation and expert validation enabled the creation of a large-scale dataset while maintaining high quality standards. The study provides essential resources for advancing medical artificial intelligence research in Japanese health care settings, with datasets and models publicly available to facilitate further advances in the field.
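A minimal sketch of the multi-label finding-classification setup follows, using the Hugging Face transformers API with the "tohoku-nlp/bert-base-japanese-v3" checkpoint named in the abstract; the label count matches the paper's 18 findings, while everything else (example text, inference-only usage) is an assumption.

```python
# Multi-label finding classification with a Japanese BERT, in the spirit of
# CT-BERT-JPN. Requires fugashi/unidic-lite for the Japanese tokenizer.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "tohoku-nlp/bert-base-japanese-v3"  # base checkpoint from the abstract
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL,
    num_labels=18,                              # 18 structured findings
    problem_type="multi_label_classification",  # sigmoid outputs + BCE loss
)

report = "両肺に結節影を認める。胸水なし。"  # toy report text (hypothetical)
inputs = tokenizer(report, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)
print(probs.shape)  # (1, 18): per-finding probabilities
```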

Automated segmentation of soft X-ray tomography: native cellular structure with sub-micron resolution at high throughput for whole-cell quantitative imaging in yeast.

Chen J, Mirvis M, Ekman A, Vanslembrouck B, Gros ML, Larabell C, Marshall WF

pubmed · Aug 28 2025
Soft X-ray tomography (SXT) is an invaluable tool for quantitatively analyzing cellular structures at sub-optical isotropic resolution. However, it has traditionally depended on manual segmentation, limiting its scalability for large datasets. Here, we leverage a deep learning-based auto-segmentation pipeline to segment and label cellular structures in hundreds of cells across three <i>Saccharomyces cerevisiae</i> strains. This task-based pipeline employs manual iterative refinement to improve segmentation accuracy for key structures, including the cell body, nucleus, vacuole, and lipid droplets, enabling high-throughput and precise phenotypic analysis. Using this approach, we quantitatively compared the 3D whole-cell morphometric characteristics of wild-type, VPH1-GFP, and <i>vac14</i> strains, uncovering detailed strain-specific cell and organelle size and shape variations. We show the utility of SXT data for precise 3D curvature analysis of entire organelles and cells and detection of fine morphological features using surface meshes. Our approach facilitates comparative analyses with high spatial precision and statistical throughput, uncovering subtle morphological features at the single-cell and population level. This workflow significantly enhances our ability to characterize cell anatomy and supports scalable studies on the mesoscale, with applications in investigating cellular architecture, organelle biology, and genetic research across diverse biological contexts.
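Downstream morphometric quantification of a labeled segmentation (organelle volumes and simple shape descriptors) can be sketched with scikit-image's regionprops; the synthetic label volume and voxel size below are stand-ins, not the paper's pipeline.

```python
# Morphometrics on a labeled 3D segmentation: per-structure volume and a
# shape descriptor, using scikit-image on a synthetic label volume.
import numpy as np
from skimage import measure

labels = np.zeros((64, 64, 64), dtype=int)
labels[10:30, 10:30, 10:30] = 1   # stand-in "vacuole"
labels[40:55, 40:55, 40:55] = 2   # stand-in "lipid droplet"

voxel_volume_um3 = 0.05**3        # assumed isotropic 50 nm voxels
for region in measure.regionprops(labels):
    vol = region.area * voxel_volume_um3  # 'area' is the voxel count in 3D
    print(f"label {region.label}: volume {vol:.4f} µm^3, "
          f"bbox fill fraction {region.extent:.2f}")
```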

Nasopharyngeal cancer adaptive radiotherapy with CBCT-derived synthetic CT: deep learning-based auto-segmentation precision and dose calculation consistency on a C-Arm linac.

Lei W, Han L, Cao Z, Duan T, Wang B, Li C, Pei X

pubmed · Aug 28 2025
To evaluate the precision of automated segmentation facilitated by deep learning (DL) and of dose calculation in adaptive radiotherapy (ART) for nasopharyngeal cancer (NPC), leveraging synthetic CT (sCT) images derived from cone-beam CT (CBCT) scans on a conventional C-arm linac. Sixteen NPC patients undergoing two-phase offline ART were analyzed retrospectively. The initial (pCT<sub>1</sub>) and adaptive (pCT<sub>2</sub>) CT scans served as the gold standard alongside weekly acquired CBCT scans. Patient data, including manually delineated contours and dose information, were imported into ArcherQA. Using a cycle-consistent generative adversarial network (cycle-GAN) trained on an independent dataset, sCT images (sCT<sub>1</sub>, sCT<sub>4</sub>, sCT<sub>4</sub><sup>*</sup>) were generated from weekly CBCT scans (CBCT<sub>1</sub>, CBCT<sub>4</sub>, CBCT<sub>4</sub>) paired with the corresponding planning CTs (pCT<sub>1</sub>, pCT<sub>1</sub>, pCT<sub>2</sub>). Auto-segmentation was performed on the sCTs, followed by GPU-accelerated Monte Carlo dose recalculation. Auto-segmentation accuracy was assessed via the Dice similarity coefficient (DSC) and 95th percentile Hausdorff distance (HD<sub>95</sub>). Dose calculation fidelity on sCTs was evaluated using dose-volume parameters. Dosimetric consistency between recalculated sCT and pCT plans was analyzed via Spearman's correlation, while volumetric changes were concurrently evaluated to quantify anatomical variations. Most anatomical structures demonstrated high pCT-sCT agreement, with mean DSC > 0.85 and HD<sub>95</sub> < 5.10 mm. Notable exceptions were the primary gross tumor volume (GTVp) in the pCT<sub>2</sub>-sCT<sub>4</sub> comparison (DSC: 0.75, HD<sub>95</sub>: 6.03 mm), the involved lymph nodes (GTVn), which showed lower agreement (DSC: 0.43, HD<sub>95</sub>: 16.42 mm), and the submandibular glands, with moderate agreement (DSC: 0.64-0.73, HD<sub>95</sub>: 4.45-5.66 mm). Dosimetric analysis revealed the largest mean differences in GTVn D<sub>99</sub> (-1.44 Gy; 95% CI: [-3.01, 0.13] Gy) and right parotid mean dose (-1.94 Gy; 95% CI: [-3.33, -0.55] Gy, p < 0.05). Anatomical variations quantified from the sCTs correlated significantly with offline adaptive plan adjustments in ART; this correlation was strong for the parotid glands (ρ > 0.72, p < 0.001), a result that aligned with the sCT-derived dose discrepancy analysis (ρ > 0.57, p < 0.05). The proposed method exhibited minor variations in volumetric and dosimetric parameters compared with prior treatment data, suggesting potential efficiency improvements for ART in NPC through reduced human dependency.
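The two agreement metrics used here, DSC and HD95, can be sketched as follows on toy binary masks; this NumPy/SciPy version approximates HD95 over all foreground voxels rather than extracted surfaces and is an assumed implementation, not the ArcherQA one.

```python
# Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance
# (HD95) on toy 2D binary masks.
import numpy as np
from scipy.spatial.distance import cdist

def dsc(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a, b):
    # Foreground voxels stand in for contour surface points (approximation).
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = cdist(pa, pb)
    return max(np.percentile(d.min(axis=1), 95),   # a -> b distances
               np.percentile(d.min(axis=0), 95))   # b -> a distances

a = np.zeros((40, 40), int); a[5:25, 5:25] = 1
b = np.zeros((40, 40), int); b[8:28, 8:28] = 1
print(f"DSC {dsc(a, b):.2f}, HD95 {hd95(a, b):.2f} voxels")
```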

PET/CT radiomics for non-invasive prediction of immunotherapy efficacy in cervical cancer.

Du T, Li C, Grzegozek M, Huang X, Rahaman M, Wang X, Sun H

pubmed · Aug 28 2025
Purpose: The prediction of immunotherapy efficacy in cervical cancer patients remains a critical clinical challenge. This study aims to develop and validate a deep learning-based automatic tumor segmentation method on PET/CT images, extract texture features from the tumor regions of cervical cancer patients, investigate their correlation with PD-L1 expression, and construct a predictive model for immunotherapy efficacy. Methods: We retrospectively collected data from 283 pathologically confirmed cervical cancer patients who underwent <sup>18</sup>F-FDG PET/CT examinations, divided into three subsets. Subset-I (n = 97) was used to develop a deep learning-based segmentation model using Attention-UNet and region-growing methods on co-registered PET/CT images. Subset-II (n = 101) was used to explore correlations between radiomic features and PD-L1 expression. Subset-III (n = 85) was used to construct and validate a radiomic model for predicting immunotherapy response. Results: The segmentation model developed on Subset-I achieved optimal performance at the 94th epoch with an IoU of 0.746 in the validation set, and manual evaluation confirmed accurate tumor localization. Sixteen features demonstrated excellent reproducibility (ICC > 0.75). In Subset-II, 183 features showed significant correlations with PD-L1 expression (P < 0.05). Using these features in Subset-III, the SVM-based radiomic model achieved the best predictive performance with an AUC of 0.935. Conclusion: We validated, respectively in Subset-I, Subset-II, and Subset-III, that deep learning models incorporating medical prior knowledge can accurately and automatically segment cervical cancer lesions, that texture features extracted from <sup>18</sup>F-FDG PET/CT are significantly associated with PD-L1 expression, and that predictive models based on these features can effectively predict the efficacy of PD-L1 immunotherapy. This approach offers a non-invasive, efficient, and cost-effective tool for guiding individualized immunotherapy in cervical cancer patients and may help reduce patient burden and accelerate treatment planning.
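The final modelling step, an SVM over PD-L1-correlated radiomic features evaluated by AUC, can be sketched with scikit-learn; the feature matrix and labels below are synthetic stand-ins sized to match the abstract's Subset-III.

```python
# SVM on radiomic features with AUC evaluation, on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(85, 183))   # 85 patients x 183 PD-L1-correlated features
y = rng.integers(0, 2, size=85)  # immunotherapy response label (toy)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```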