Page 52 of 346 · 3455 results

Perivascular inflammation in the progression of aortic aneurysms in Marfan syndrome.

Sowa H, Yagi H, Ueda K, Hashimoto M, Karasaki K, Liu Q, Kurozumi A, Adachi Y, Yanase T, Okamura S, Zhai B, Takeda N, Ando M, Yamauchi H, Ito N, Ono M, Akazawa H, Komuro I

pubmed · Aug 28, 2025
Inflammation plays important roles in the pathogenesis of vascular diseases. Here we show the involvement of perivascular inflammation in aortic dilatation in Marfan syndrome (MFS). In the aorta of MFS patients and Fbn1C1041G/+ mice, macrophages markedly accumulated in periaortic tissues with increased inflammatory cytokine expression. Metabolic inflammatory stress induced by a high-fat diet (HFD) enhanced vascular inflammation predominantly in periaortic tissues and accelerated aortic dilatation in Fbn1C1041G/+ mice, both of which were inhibited by low-dose pitavastatin. HFD feeding also intensified structural disorganization of the tunica media in Fbn1C1041G/+ mice, including elastic fiber fragmentation, fibrosis, and proteoglycan accumulation, along with increased activation of TGF-β downstream targets. Pitavastatin treatment mitigated these alterations. For non-invasive assessment of perivascular adipose tissue (PVAT) inflammation in a clinical setting, we developed an automated, machine learning-based analysis program for CT images that calculates the perivascular fat attenuation index of the ascending aorta (AA-FAI), which correlates with periaortic fat inflammation. The AA-FAI was significantly higher in patients with MFS than in patients without hereditary connective tissue disorders. These results suggest that perivascular inflammation contributes to aneurysm formation in MFS and may be a potential target for preventing and treating vascular events in MFS.
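Perivascular fat attenuation indices like the AA-FAI described above are, in their common formulation, the mean CT attenuation of voxels falling within an adipose Hounsfield-unit window around the vessel. A minimal sketch under that assumption, using the conventional [-190, -30] HU adipose range (the abstract does not specify the exact implementation):

```python
def fat_attenuation_index(hu_values, lo=-190, hi=-30):
    """Mean attenuation (HU) of voxels in the adipose window.

    hu_values: CT attenuation values (HU) sampled from the perivascular
    region around the ascending aorta. The [-190, -30] HU window is a
    conventional adipose-tissue range, assumed here for illustration.
    """
    fat = [v for v in hu_values if lo <= v <= hi]
    if not fat:
        raise ValueError("no adipose-range voxels in region")
    return sum(fat) / len(fat)

# toy region: five adipose voxels plus two non-adipose values the window excludes
region = [-95, -80, -110, -70, -60, 40, 300]
print(fat_attenuation_index(region))  # -83.0, mean of the five adipose voxels
```

Higher (less negative) values indicate denser, more inflamed perivascular fat, which is the direction of the MFS-versus-control difference reported above.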

Deep Learning Framework for Early Detection of Pancreatic Cancer Using Multi-Modal Medical Imaging Analysis

Dennis Slobodzian, Karissa Tilbury, Amir Kordijazi

arxiv preprint · Aug 28, 2025
Pancreatic ductal adenocarcinoma (PDAC) remains one of the most lethal forms of cancer, with a five-year survival rate below 10%, primarily due to late detection. This research develops and validates a deep learning framework for early PDAC detection through analysis of dual-modality imaging: autofluorescence and second harmonic generation (SHG). We analyzed 40 unique patient samples to create a specialized neural network capable of distinguishing between normal, fibrotic, and cancerous tissue. Our methodology evaluated six distinct deep learning architectures, comparing traditional Convolutional Neural Networks (CNNs) with modern Vision Transformers (ViTs). Through systematic experimentation, we identified and overcame significant challenges in medical image analysis, including limited dataset size and class imbalance. The final optimized framework, based on a modified ResNet architecture with frozen pre-trained layers and class-weighted training, achieved over 90% accuracy in cancer detection. This represents a significant improvement over current manual analysis methods and demonstrates potential for clinical deployment. This work establishes a robust pipeline for automated PDAC detection that can augment pathologists' capabilities while providing a foundation for future expansion to other cancer types. The developed methodology also offers valuable insights for applying deep learning to limited-size medical imaging datasets, a common challenge in clinical applications.
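Class-weighted training, which the abstract names as one remedy for class imbalance, typically weights each class inversely to its frequency so that rare classes contribute more to the loss. A minimal sketch of the common "balanced" weighting heuristic (the paper's exact scheme is not stated; the three-class label counts below are illustrative):

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency class weights: w_c = N / (K * n_c),
    where N is the sample count, K the number of classes, and
    n_c the count of class c (sklearn's 'balanced' heuristic)."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

# hypothetical imbalance across the three tissue classes in the study
labels = ["normal"] * 25 + ["fibrotic"] * 10 + ["cancer"] * 5
w = class_weights(labels)
print({c: round(v, 2) for c, v in sorted(w.items())})
# {'cancer': 2.67, 'fibrotic': 1.33, 'normal': 0.53}
```

The resulting weights would be passed to the loss function during training so misclassifying the rare cancer class costs roughly five times as much as the majority class.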

Automated segmentation of soft X-ray tomography: native cellular structure with sub-micron resolution at high throughput for whole-cell quantitative imaging in yeast.

Chen J, Mirvis M, Ekman A, Vanslembrouck B, Gros ML, Larabell C, Marshall WF

pubmed · Aug 28, 2025
Soft X-ray tomography (SXT) is an invaluable tool for quantitatively analyzing cellular structures at sub-optical isotropic resolution. However, it has traditionally depended on manual segmentation, limiting its scalability for large datasets. Here, we leverage a deep learning-based auto-segmentation pipeline to segment and label cellular structures in hundreds of cells across three <i>Saccharomyces cerevisiae</i> strains. This task-based pipeline employs manual iterative refinement to improve segmentation accuracy for key structures, including the cell body, nucleus, vacuole, and lipid droplets, enabling high-throughput and precise phenotypic analysis. Using this approach, we quantitatively compared the 3D whole-cell morphometric characteristics of wild-type, VPH1-GFP, and <i>vac14</i> strains, uncovering detailed strain-specific cell and organelle size and shape variations. We show the utility of SXT data for precise 3D curvature analysis of entire organelles and cells and detection of fine morphological features using surface meshes. Our approach facilitates comparative analyses with high spatial precision and statistical throughput, uncovering subtle morphological features at the single-cell and population level. This workflow significantly enhances our ability to characterize cell anatomy and supports scalable studies on the mesoscale, with applications in investigating cellular architecture, organelle biology, and genetic research across diverse biological contexts.
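Whole-organelle shape comparison of the kind described above often reduces to simple morphometric descriptors computed from segmented volumes and surface meshes. As a hedged illustration, sphericity is one standard such descriptor (not necessarily the exact measure used in this study):

```python
import math

def sphericity(volume, surface_area):
    """Sphericity = pi^(1/3) * (6V)^(2/3) / A.

    Equals 1.0 for a perfect sphere and decreases as the shape
    becomes more irregular; a standard 3D morphometric descriptor
    computable from a segmented volume and its surface mesh area.
    """
    return math.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / surface_area

# sanity check on a unit sphere
r = 1.0
v = 4 / 3 * math.pi * r ** 3
a = 4 * math.pi * r ** 2
print(round(sphericity(v, a), 3))  # 1.0
```

Descriptors like this, evaluated per organelle across hundreds of segmented cells, are what make the strain-to-strain statistical comparisons in the abstract tractable.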

Adapting Foundation Model for Dental Caries Detection with Dual-View Co-Training

Tao Luo, Han Wu, Tong Yang, Dinggang Shen, Zhiming Cui

arxiv preprint · Aug 28, 2025
Accurate dental caries detection from panoramic X-rays plays a pivotal role in preventing lesion progression. However, current detection methods often yield suboptimal accuracy due to subtle contrast variations and diverse lesion morphology of dental caries. In this work, inspired by the clinical workflow where dentists systematically combine whole-image screening with detailed tooth-level inspection, we present DVCTNet, a novel Dual-View Co-Training network for accurate dental caries detection. Our DVCTNet starts with employing automated tooth detection to establish two complementary views: a global view from panoramic X-ray images and a local view from cropped tooth images. We then pretrain two vision foundation models separately on the two views. The global-view foundation model serves as the detection backbone, generating region proposals and global features, while the local-view model extracts detailed features from corresponding cropped tooth patches matched by the region proposals. To effectively integrate information from both views, we introduce a Gated Cross-View Attention (GCV-Atten) module that dynamically fuses dual-view features, enhancing the detection pipeline by integrating the fused features back into the detection model for final caries detection. To rigorously evaluate our DVCTNet, we test it on a public dataset and further validate its performance on a newly curated, high-precision dental caries detection dataset, annotated using both intra-oral images and panoramic X-rays for double verification. Experimental results demonstrate DVCTNet's superior performance against existing state-of-the-art (SOTA) methods on both datasets, indicating the clinical applicability of our method. Our code and labeled dataset are available at https://github.com/ShanghaiTech-IMPACT/DVCTNet.
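The GCV-Atten module above fuses global- and local-view features through a learned gate; the abstract gives no implementation details. As a toy stand-in, a fixed elementwise sigmoid gate shows the shape of such a fusion (the real module uses learned cross-view attention, which this sketch does not reproduce):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def gated_fusion(global_feat, local_feat, gate_w):
    """Per-dimension gated fusion of two feature vectors:
    g = sigmoid(w * (fg + fl));  out = g * fg + (1 - g) * fl.

    A simplified scalar stand-in for a gated cross-view module:
    the gate decides, per dimension, how much of the global vs.
    local feature to keep. gate_w is fixed here; in a trained
    model it would be learned.
    """
    out = []
    for fg, fl, w in zip(global_feat, local_feat, gate_w):
        g = sigmoid(w * (fg + fl))
        out.append(g * fg + (1 - g) * fl)
    return out

# with zero gate weights, g = 0.5 everywhere and the fusion is the elementwise mean
fused = gated_fusion([1.0, -2.0], [0.5, 1.0], [0.0, 0.0])
print(fused)  # [0.75, -0.5]
```

The fused features would then feed back into the detection head, as the abstract describes for the final caries prediction.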

Deep Learning-Based Generation of DSC MRI Parameter Maps Using Dynamic Contrast-Enhanced MRI Data.

Pei H, Lyu Y, Lambrecht S, Lin D, Feng L, Liu F, Nyquist P, van Zijl P, Knutsson L, Xu X

pubmed · Aug 28, 2025
Perfusion and perfusion-related parameter maps obtained by using DSC MRI and dynamic contrast-enhanced (DCE) MRI are both useful for clinical diagnosis and research. However, using both DSC and DCE MRI in the same scan session requires 2 doses of gadolinium contrast agent. The objective was to develop deep learning-based methods to synthesize DSC-derived parameter maps from DCE MRI data. Independent analysis of data collected in previous studies was performed. The database contained 64 participants, including patients with and without brain tumors. The reference parameter maps were measured from DSC MRI performed after DCE MRI. A conditional generative adversarial network (cGAN) was designed and trained to generate synthetic DSC-derived maps from DCE MRI data. The median parameter values and distributions between synthetic and real maps were compared by using linear regression and Bland-Altman plots. Using cGAN, realistic DSC parameter maps could be synthesized from DCE MRI data. For controls without brain tumors, the synthesized parameters had distributions similar to the ground truth values. For patients with brain tumors, the synthesized parameters in the tumor region correlated linearly with the ground truth values. In addition, areas not visible due to susceptibility artifacts in real DSC maps could be visualized by using DCE-derived DSC maps. DSC-derived parameter maps could be synthesized by using DCE MRI data, including susceptibility-artifact-prone regions. This shows the potential to obtain both DSC and DCE parameter maps from DCE MRI by using a single dose of contrast agent.
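The Bland-Altman comparison of synthetic and real parameter maps mentioned above reduces, numerically, to the bias (mean difference) and 95% limits of agreement of the paired differences. A minimal sketch with made-up paired values:

```python
import statistics

def bland_altman(real, synth):
    """Bland-Altman agreement statistics for paired measurements:
    returns (bias, lower limit, upper limit), where the limits of
    agreement are bias +/- 1.96 * SD of the differences."""
    diffs = [s - r for r, s in zip(real, synth)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# hypothetical paired parameter values (real DSC map vs. cGAN-synthesized map)
real = [1.0, 2.0, 3.0, 4.0, 5.0]
synth = [1.1, 2.0, 2.9, 4.2, 5.0]
bias, lower, upper = bland_altman(real, synth)
print(round(bias, 3))  # 0.04
```

A bias near zero with tight limits of agreement is the numerical signature of the "similar distributions" reported for controls in the abstract.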

High-Resolution 3T MRI of the Membranous Labyrinth Using Deep Learning Reconstruction.

Boubaker F, Lane JI, Puel U, Drouot G, Witte RJ, Ambarki K, Teixeira PAG, Blum A, Parietti-Winkler C, Vallee JN, Gillet R, Eliezer M

pubmed · Aug 28, 2025
The labyrinth is a complex anatomical structure in the temporal bone. However, high-resolution imaging of its membranous portion is challenging due to its small size and the limitations of current MRI techniques. Deep Learning Reconstruction (DLR) represents a promising approach to advancing MRI image quality, enabling higher spatial resolution and reduced noise. This study aims to evaluate DLR-High-Resolution 3D-T2 MRI sequences for visualizing the labyrinthine structures, comparing them to conventional 3D-T2 sequences. The goal is to improve spatial resolution without prolonging acquisition times, allowing a more detailed view of the labyrinthine microanatomy. High-resolution heavy T2-weighted TSE SPACE images were acquired in patients using 3D-T2 and DLR-3D-T2. Two radiologists rated structure visibility on a four-point qualitative scale for the spiral lamina, scala tympani, scala vestibuli, scala media, utricle, saccule, utricular and saccular maculae, membranous semicircular ducts, and ampullary nerves. Ex vivo 9.4T MRI served as an anatomical reference. DLR-3D-T2 significantly improved the visibility of several inner ear structures. The utricle and utricular macula were systematically visualized, achieving grades ≥3 in 95% of cases (p < 0.001), while the saccule remained challenging to assess, with grades ≥3 in only 10% of cases. The cochlear spiral lamina and scala tympani were better delineated in the first two turns but remained poorly visible in the apical turn. Semicircular ducts were only partially visualized, with grades ≥3 in 12.5-20% of cases, likely due to resolution limitations relative to their diameter. Ampullary nerves were moderately improved, with grades ≥3 in 52.5-55% of cases, depending on the nerve. While DLR does not yet provide a complete anatomical assessment, it represents a significant step forward in the non-invasive evaluation of inner ear structures. 
Pending further technical refinements, this approach may help reduce reliance on delayed gadolinium-enhanced techniques for imaging membranous structures. Abbreviations: 3D-T2 = three-dimensional T2-weighted turbo spin-echo; DLR = deep learning reconstruction; DLR-3D-T2 = T2-weighted turbo spin-echo sequence incorporating deep learning reconstruction.

Canadian radiology: 2025 update.

Yao J, Ahmad W, Cheng S, Costa AF, Ertl-Wagner BB, Nicolaou S, Souza C, Patlas MN

pubmed · Aug 28, 2025
Radiology in Canada is evolving through a combination of clinical innovation, collaborative research, and the adoption of advanced imaging technologies. This overview highlights contributions from selected academic centres across the country that are shaping diagnostic and interventional practice. At Dalhousie University, researchers have led efforts to improve contrast media safety, refine imaging techniques for hepatopancreatobiliary diseases, and develop peer learning programs that support continuous quality improvement. The University of Ottawa has made advances in radiomics, magnetic resonance imaging protocols, and virtual reality applications for surgical planning, while contributing to global research networks focused on evaluating LI-RADS performance. At the University of British Columbia, the implementation of photon-counting CT, dual-energy CT, and artificial intelligence (AI) tools is enhancing diagnostic precision in oncology, trauma, and stroke imaging. The Hospital for Sick Children is a leader in paediatric radiology, with work ranging from AI-based brain tumour classification to innovations in foetal MRI and congenital heart disease imaging. Together, these initiatives reflect the strength and diversity of Canadian radiology, demonstrating a shared commitment to advancing patient care through innovation, data-driven practice, and collaboration.

Dual-model approach for accurate chest disease detection using GViT and swin transformer V2.

Ahmad K, Rehman HU, Shah B, Ali F, Hussain I

pubmed · Aug 28, 2025
The precise detection and localization of abnormalities in radiological images are crucial for clinical diagnosis and treatment planning. Building reliable models requires large, annotated datasets containing disease labels and abnormality locations. Radiologists often face challenges in identifying and segmenting thoracic diseases such as COVID-19, pneumonia, tuberculosis, and lung cancer due to overlapping visual patterns in X-ray images. This study proposes a dual-model approach: Gated Vision Transformers (GViT) for classification and Swin Transformer V2 for segmentation and localization. GViT successfully identifies thoracic diseases that exhibit similar radiographic features, while Swin Transformer V2 maps lung areas and pinpoints affected regions. Classification metrics, including precision, recall, and F1-scores, surpassed 0.95, while the Intersection over Union (IoU) score reached 90.98%. Performance assessment via Dice Coefficient, Boundary F1-Score, and Hausdorff Distance demonstrated the system's effectiveness. This artificial intelligence solution can help radiologists reduce their cognitive workload while improving diagnostic precision in resource-constrained healthcare systems. The results indicate that transformer-based architectures hold strong promise for enhancing medical imaging workflows. Future AI tools should build on this foundation, focusing on comprehensive and precise detection of chest diseases to support effective clinical decision-making.
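The segmentation metrics reported above (IoU and Dice) are computed directly from overlap between predicted and ground-truth binary masks. A minimal sketch with toy masks represented as sets of pixel coordinates:

```python
def iou_dice(pred, truth):
    """IoU and Dice for binary masks given as sets of pixel coordinates.

    IoU  = |intersection| / |union|
    Dice = 2 * |intersection| / (|pred| + |truth|)
    """
    inter = len(pred & truth)
    iou = inter / len(pred | truth)
    dice = 2 * inter / (len(pred) + len(truth))
    return iou, dice

# toy 2x2-ish masks: three pixels overlap out of five total
pred = {(0, 0), (0, 1), (1, 0), (1, 1)}
truth = {(0, 1), (1, 0), (1, 1), (2, 1)}
iou, dice = iou_dice(pred, truth)
print(iou, dice)  # 0.6 0.75
```

Dice is always at least as large as IoU for the same masks, which is why papers often report both; the 90.98% IoU cited above implies an even higher Dice.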

AI-driven body composition monitoring and its prognostic role in mCRPC undergoing lutetium-177 PSMA radioligand therapy: insights from a retrospective single-center analysis.

Ruhwedel T, Rogasch J, Galler M, Schatka I, Wetz C, Furth C, Biernath N, De Santis M, Shnayien S, Kolck J, Geisel D, Amthauer H, Beetz NL

pubmed · Aug 28, 2025
Body composition (BC) analysis is performed to quantify the relative amounts of different body tissues as a measure of physical fitness and tumor cachexia. We hypothesized that relative changes in BC parameters, assessed by an artificial intelligence-based, PACS-integrated software, between baseline imaging before the start of radioligand therapy (RLT) and interim staging after two RLT cycles could predict overall survival (OS) in patients with metastatic castration-resistant prostate cancer (mCRPC). We conducted a single-center, retrospective analysis of 92 patients with mCRPC undergoing [<sup>177</sup>Lu]Lu-PSMA RLT between September 2015 and December 2023. All patients had [<sup>68</sup>Ga]Ga-PSMA-11 PET/CT at baseline (≤ 6 weeks before the first RLT cycle) and at interim staging (6-8 weeks after the second RLT cycle), allowing for longitudinal BC assessment. During follow-up, 78 patients (85%) died. Median OS was 16.3 months. Median follow-up time in survivors was 25.6 months. The 1-year mortality rate was 32.6% (95% CI 23.0-42.2%) and the 5-year mortality rate was 92.9% (95% CI 85.8-100.0%). In multivariable regression, relative change in visceral adipose tissue (VAT) (HR 0.26; p = 0.006), previous chemotherapy of any type (HR 2.4; p = 0.003), the presence of liver metastases (HR 2.4; p = 0.018), and a higher baseline De Ritis ratio (HR 1.4; p < 0.001) remained independent predictors of OS. Patients with a greater decrease in VAT (< -20%) had a median OS of 10.2 months versus 18.5 months in patients with a smaller VAT decrease or a VAT increase (≥ -20%) (log-rank test: p = 0.008). In a separate Cox model, the change in VAT predicted OS (p = 0.005) independently of the best PSA response after 1-2 RLT cycles (p = 0.09), and there was no interaction between the two (p = 0.09). PACS-integrated, AI-based BC monitoring detects the relative change in VAT, which was an independent predictor of shorter OS in our population of patients undergoing RLT.
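The prognostic split used above (relative VAT change below versus at or above -20%) is straightforward to compute from baseline and interim volumes. A sketch with hypothetical volumes; the function and group names are illustrative, not the authors':

```python
def vat_change_group(baseline_cm3, interim_cm3, cutoff=-20.0):
    """Relative VAT change (%) between baseline and interim staging,
    dichotomized at the -20% cutoff reported in the abstract.

    Returns (percent change, group label). Negative change = VAT loss.
    """
    change = 100.0 * (interim_cm3 - baseline_cm3) / baseline_cm3
    group = "high VAT loss" if change < cutoff else "low loss / gain"
    return change, group

# hypothetical patient: 2000 cm^3 at baseline, 1500 cm^3 at interim staging
change, group = vat_change_group(2000.0, 1500.0)
print(change, group)  # -25.0 high VAT loss
```

Under the abstract's findings, this hypothetical patient would fall in the poorer-prognosis stratum (median OS 10.2 vs. 18.5 months).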

Hybrid quantum-classical-quantum convolutional neural networks.

Long C, Huang M, Ye X, Futamura Y, Sakurai T

pubmed · Aug 28, 2025
Deep learning has achieved significant success in pattern recognition, with convolutional neural networks (CNNs) serving as a foundational architecture for extracting spatial features from images. Quantum computing provides an alternative computational framework; hybrid quantum-classical convolutional neural networks (QCCNNs) leverage high-dimensional Hilbert spaces and entanglement to surpass classical CNNs in image classification accuracy under comparable architectures. Despite these performance improvements, QCCNNs typically use fixed quantum layers without trainable quantum parameters. This limits their ability to capture non-linear quantum representations and cuts the model off from the potential advantages of expressive quantum learning. In this work, we present a hybrid quantum-classical-quantum convolutional neural network (QCQ-CNN) that incorporates a quantum convolutional filter, a shallow classical CNN, and a trainable variational quantum classifier. This architecture aims to enhance the expressivity of decision boundaries in image classification tasks by introducing tunable quantum parameters into the end-to-end learning process. Through a series of small-sample experiments on MNIST, F-MNIST, and MRI tumor datasets, QCQ-CNN demonstrates competitive accuracy and convergence behavior compared to classical and hybrid baselines. We further analyze the effect of ansatz depth and find that moderate-depth quantum circuits can improve learning stability without introducing excessive complexity. Additionally, simulations incorporating depolarizing noise and finite sampling shots suggest that QCQ-CNN maintains a degree of robustness under realistic quantum noise conditions. While our results are currently limited to simulations with small-scale quantum circuits, the proposed approach offers a promising direction for hybrid quantum learning in near-term applications.
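A trainable variational quantum classifier like the one in QCQ-CNN is, at its smallest, a parameterized rotation whose measured expectation value feeds the loss. A one-qubit pure-Python illustration of that idea (the paper's circuits are larger and simulated with quantum-ML frameworks, which this sketch does not reproduce):

```python
import math

def ry_expectation(theta, x):
    """<Z> expectation after encoding input x with RY(x), then applying a
    trainable RY(theta), on a single qubit starting in |0>.

    RY rotations about the same axis compose additively, and for
    RY(a)|0> = cos(a/2)|0> + sin(a/2)|1> the Z expectation is
    cos^2(a/2) - sin^2(a/2) = cos(a). Training adjusts theta so this
    scalar output separates the classes.
    """
    return math.cos(x + theta)

# with theta = -x the trainable rotation undoes the encoding and <Z> returns to 1
print(round(ry_expectation(-0.7, 0.7), 3))  # 1.0
```

Gradient descent on theta (via the parameter-shift rule in real frameworks) is what makes such a layer "trainable", in contrast to the fixed quantum layers of earlier QCCNNs that the abstract critiques.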