
Wall Shear Stress Estimation in Abdominal Aortic Aneurysms: Towards Generalisable Neural Surrogate Models

Patryk Rygiel, Julian Suk, Christoph Brune, Kak Khee Yeung, Jelmer M. Wolterink

arXiv preprint · Jul 30, 2025
Abdominal aortic aneurysms (AAAs) are pathologic dilatations of the abdominal aorta posing a high fatality risk upon rupture. Studying AAA progression and rupture risk often involves in-silico blood flow modelling with computational fluid dynamics (CFD) and extraction of hemodynamic factors like time-averaged wall shear stress (TAWSS) or oscillatory shear index (OSI). However, CFD simulations are known to be computationally demanding. Hence, in recent years, geometric deep learning methods, operating directly on 3D shapes, have been proposed as compelling surrogates, estimating hemodynamic parameters in just a few seconds. In this work, we propose a geometric deep learning approach to estimating hemodynamics in AAA patients, and study its generalisability to common factors of real-world variation. We propose an E(3)-equivariant deep learning model utilising novel robust geometrical descriptors and projective geometric algebra. Our model is trained to estimate transient WSS using a dataset of CT scans of 100 AAA patients, from which lumen geometries are extracted and reference CFD simulations with varying boundary conditions are obtained. Results show that the model generalises well within the training distribution, as well as to the external test set. Moreover, the model can accurately estimate hemodynamics across geometry remodelling and changes in boundary conditions. Furthermore, we find that a trained model can be applied to different artery tree topologies, where new and unseen branches are added during inference. Finally, we find that the model is largely agnostic to mesh resolution. These results show the accuracy and generalisation of the proposed model, and highlight its potential to contribute to hemodynamic parameter estimation in clinical practice.
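
For context, TAWSS and OSI are simple temporal reductions of the transient WSS field the model predicts; a minimal NumPy sketch of the two standard formulas (array layout and names are illustrative, not from the paper):

```python
import numpy as np

def tawss_and_osi(wss, dt):
    """TAWSS and OSI from a transient wall shear stress field.

    wss: (T, N, 3) WSS vectors at T time steps for N surface mesh
         nodes (Pa); uniform sampling over one cardiac cycle assumed.
    dt:  time step (s).
    """
    period = dt * wss.shape[0]
    # Time integral of the WSS magnitude at each node.
    mag_integral = np.trapz(np.linalg.norm(wss, axis=-1), dx=dt, axis=0)
    # Magnitude of the time integral of the WSS vector.
    vec_integral = np.linalg.norm(np.trapz(wss, dx=dt, axis=0), axis=-1)
    tawss = mag_integral / period
    # OSI in [0, 0.5]; 0.5 indicates purely oscillatory shear.
    osi = 0.5 * (1.0 - vec_integral / np.maximum(mag_integral, 1e-12))
    return tawss, osi
```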

Applications of artificial intelligence and advanced imaging in pediatric diffuse midline glioma.

Haddadi Avval A, Banerjee S, Zielke J, Kann BH, Mueller S, Rauschecker AM

PubMed · Jul 30, 2025
Diffuse midline glioma (DMG) is a rare, aggressive, and fatal tumor that largely occurs in the pediatric population. To improve outcomes, it is important to characterize DMGs, which is typically performed via magnetic resonance imaging (MRI) assessment. Recently, artificial intelligence (AI) and advanced imaging have demonstrated their potential to improve the evaluation of various brain tumors, gleaning more information from imaging data than is possible without these methods. This narrative review compiles the existing literature on the intersection of MRI-based AI use and DMG tumors. The applications of AI in DMG revolve around classification and diagnosis, segmentation, radiogenomics, and prognosis/survival prediction. Currently published articles have utilized a wide spectrum of AI algorithms, from traditional machine learning and radiomics to neural networks. Challenges include the overall rarity of the disease and the resulting lack of publicly available, multi-institutional, multimodal imaging and genomics datasets for DMG patients. As an adjunct to AI, advanced MRI techniques, including diffusion-weighted imaging, perfusion-weighted imaging, and magnetic resonance spectroscopy (MRS), as well as positron emission tomography (PET), provide additional insights into DMGs. Establishing AI models in conjunction with advanced imaging modalities has the potential to push clinical practice toward precision medicine.

A privacy preserving machine learning framework for medical image analysis using quantized fully connected neural networks with TFHE based inference.

Selvakumar S, Senthilkumar B

PubMed · Jul 30, 2025
Medical image analysis using deep learning algorithms has become a cornerstone of modern healthcare, enabling early detection, diagnosis, treatment planning, and disease monitoring. However, sharing sensitive raw medical data with third parties for analysis raises significant privacy concerns. This paper presents a privacy-preserving machine learning (PPML) framework using a Fully Connected Neural Network (FCNN) for secure medical image analysis on the MedMNIST dataset. The proposed PPML framework leverages torus-based fully homomorphic encryption (TFHE) to ensure data privacy during inference, maintain patient confidentiality, and comply with privacy regulations. The FCNN model is trained in a plaintext environment using quantization-aware training to optimize weights and activations for FHE compatibility. The quantized FCNN model is then validated under FHE constraints through simulation and compiled into an FHE-compatible circuit for encrypted inference on sensitive data. The proposed framework is evaluated on the MedMNIST datasets to assess its accuracy and inference time in both plaintext and encrypted environments. Experimental results reveal that the PPML framework achieves a prediction accuracy of 88.2% in the plaintext setting and 87.5% during encrypted inference, with an average inference time of 150 milliseconds per image. This shows that FCNN models paired with TFHE-based encryption achieve high prediction accuracy on MedMNIST datasets with minimal performance degradation compared to unencrypted inference.
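
A hedged sketch of the plaintext half of such a pipeline in PyTorch; the architecture is illustrative (the abstract does not specify layer sizes), and the TFHE compilation step is indicated only as a comment, with Zama's Concrete ML named as one possible toolchain rather than the paper's confirmed stack:

```python
import torch.nn as nn

# Illustrative FCNN for 28x28 MedMNIST images; the paper does not
# publish its exact layer sizes in the abstract.
class FCNN(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = FCNN(n_classes=9)
# 1) Train in plaintext with quantization-aware training so weights
#    and activations fit the low bit-widths TFHE circuits require.
# 2) Compile the quantized model into an FHE circuit. With Zama's
#    Concrete ML this is roughly (API assumed; check your version):
#    from concrete.ml.torch.compile import compile_torch_model
#    fhe_model = compile_torch_model(model, calibration_inputs, n_bits=4)
# 3) Encrypted inference: fhe_model.forward(x, fhe="execute")
```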

Trabecular bone analysis: ultra-high-resolution CT goes far beyond high-resolution CT and gets closer to micro-CT (a study using Canon Medical CT devices).

Gillet R, Puel U, Amer A, Doyen M, Boubaker F, Assabah B, Hossu G, Gillet P, Blum A, Teixeira PAG

PubMed · Jul 30, 2025
High-resolution CT (HR-CT) cannot image trabecular bone due to insufficient spatial resolution. Ultra-high-resolution CT may be a valuable alternative. We aimed to describe the accuracy of Canon Medical HR, super-high-resolution (SHR), and ultra-high-resolution (UHR) CT in measuring trabecular bone microarchitectural parameters using micro-CT as a reference. Sixteen cadaveric distal tibial epiphyses were included in this pre-clinical study. Images were acquired with HR-CT (i.e., 0.5 mm slice thickness/512² matrix) and SHR-CT (i.e., 0.25 mm slice thickness/1024² matrix) with and without deep learning reconstruction (DLR) and UHR-CT (i.e., 0.25 mm slice thickness/2048² matrix) without DLR. Trabecular bone parameters were compared. Trabecular thickness was closest with UHR-CT but remained 1.37 times that of micro-CT (P < 0.001). With SHR-CT without and with DLR, it was 1.75 and 1.79 times that of micro-CT, respectively (P < 0.001), and 3.58 and 3.68 times that of micro-CT with HR-CT without and with DLR, respectively (P < 0.001). Trabecular separation was 0.7 times that of micro-CT with UHR-CT (P < 0.001), 0.93 and 0.94 times that of micro-CT with SHR-CT without and with DLR (P = 0.36 and 0.79, respectively), and 1.52 and 1.36 times that of micro-CT with HR-CT without and with DLR (P < 0.001). Bone volume/total volume was overestimated (i.e., 1.66 to 1.92 times that of micro-CT) by all techniques (P < 0.001). However, HR-CT values were higher than UHR-CT values (P = 0.03 and 0.01, without and with DLR, respectively). UHR and SHR-CT were the closest techniques to micro-CT and surpassed HR-CT.
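
Of the compared metrics, bone volume/total volume (BV/TV) is the simplest to state precisely; a sketch on a thresholded CT volume (the fixed threshold and ROI handling are assumptions, not the study's protocol):

```python
import numpy as np

def bv_tv(volume_hu, bone_threshold_hu=300.0, roi=None):
    """Bone volume / total volume from a CT volume in Hounsfield units.

    The fixed global threshold is an illustrative assumption; the
    study's actual segmentation protocol is not reproduced here.
    """
    bone = volume_hu >= bone_threshold_hu
    if roi is not None:                 # restrict to a trabecular ROI
        return (bone & roi).sum() / roi.sum()
    return bone.sum() / bone.size
```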

Deep learning-driven brain tumor classification and segmentation using non-contrast MRI.

Lu NH, Huang YH, Liu KY, Chen TB

PubMed · Jul 30, 2025
This study aims to enhance the accuracy and efficiency of MRI-based brain tumor diagnosis by leveraging deep learning (DL) techniques applied to multichannel MRI inputs. MRI data were collected from 203 subjects, including 100 normal cases and 103 cases with 13 distinct brain tumor types. Non-contrast T1-weighted (T1w) and T2-weighted (T2w) images were combined with their average to form RGB three-channel inputs, enriching the representation for model training. Several convolutional neural network (CNN) architectures were evaluated for tumor classification, while fully convolutional networks (FCNs) were employed for tumor segmentation. Standard preprocessing, normalization, and training procedures were rigorously followed. The RGB fusion of T1w, T2w, and their average significantly enhanced model performance. The classification task achieved a top accuracy of 98.3% using the Darknet53 model, and segmentation attained a mean Dice score of 0.937 with ResNet50. These results demonstrate the effectiveness of multichannel input fusion and model selection in improving brain tumor analysis. While not yet integrated into clinical workflows, this approach holds promise for future development of DL-assisted decision-support tools in radiological practice.
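
The three-channel fusion described above is straightforward to reproduce; a minimal sketch (min-max normalization is an illustrative choice, not necessarily the paper's recipe):

```python
import numpy as np

def fuse_t1_t2(t1w, t2w):
    """Stack T1w, T2w, and their average into a three-channel input.

    t1w, t2w: 2D slices of identical shape; min-max normalization to
    [0, 1] is an illustrative choice, not necessarily the paper's.
    """
    def norm(img):
        img = img.astype(np.float32)
        return (img - img.min()) / (img.max() - img.min() + 1e-8)

    t1, t2 = norm(t1w), norm(t2w)
    avg = 0.5 * (t1 + t2)
    return np.stack([t1, t2, avg], axis=-1)   # (H, W, 3) CNN input
```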

MitoStructSeg: mitochondrial structural complexity resolution via adaptive learning for cross-sample morphometric profiling

Wang, X., Wan, X., Cai, B., Jia, Z., Chen, Y., Guo, S., Liu, Z., Zhang, F., Hu, B.

bioRxiv preprint · Jul 30, 2025
Mitochondrial morphology and structural changes are closely associated with metabolic dysfunction and disease progression. However, the structural complexity of mitochondria presents a major challenge for accurate segmentation and analysis. Most existing methods focus on delineating entire mitochondria but lack the capability to resolve fine internal features, particularly cristae. In this study, we introduce MitoStructSeg, a deep learning-based framework for mitochondrial structure segmentation and quantitative analysis. The core of MitoStructSeg is AMM-Seg, a novel model that integrates domain adaptation to improve cross-sample generalization, dual-channel feature fusion to enhance structural detail extraction, and continuity learning to preserve spatial coherence. This architecture enables accurate segmentation of both mitochondrial membranes and intricately folded cristae. MitoStructSeg further incorporates a quantitative analysis module that extracts key morphological metrics, including surface area, volume, and cristae density, allowing comprehensive and scalable assessment of mitochondrial morphology. The effectiveness of our approach has been validated on both human myocardial tissue and mouse kidney tissue, demonstrating its robustness in accurately segmenting mitochondria with diverse morphologies. In addition, we provide an open-source, user-friendly tool to ensure practical usability.
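
The surface-area and volume metrics map onto standard voxel and mesh operations; a scikit-image sketch (cristae density is omitted because the abstract does not define it):

```python
import numpy as np
from skimage import measure

def mito_morphometrics(mask, spacing=(1.0, 1.0, 1.0)):
    """Surface area and volume for one binary mitochondrion mask.

    mask: 3D boolean array; spacing: voxel size in physical units.
    """
    volume = mask.sum() * float(np.prod(spacing))
    # Mesh the boundary and sum triangle areas for the surface area.
    verts, faces, _, _ = measure.marching_cubes(
        mask.astype(np.float32), level=0.5, spacing=spacing)
    surface_area = measure.mesh_surface_area(verts, faces)
    return {"volume": volume, "surface_area": surface_area}
```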

Use of imaging biomarkers and ambulatory functional endpoints in Duchenne muscular dystrophy clinical trials: Systematic review and machine learning-driven trend analysis.

Todd M, Kang S, Wu S, Adhin D, Yoon DY, Willcocks R, Kim S

PubMed · Jul 29, 2025
Duchenne muscular dystrophy (DMD) is a rare X-linked genetic muscle disorder affecting primarily pediatric males and leading to limited life expectancy. This systematic review of 85 DMD trials and non-interventional studies (2010-2022) evaluated how magnetic resonance imaging biomarkers, particularly fat fraction and T2 relaxation time, are currently being used to quantitatively track disease progression and how their use compares to traditional mobility-based functional endpoints. Imaging biomarker studies lasted on average 4.50 years, approximately 11 months longer than those using only ambulatory functional endpoints. While 93% of biologic intervention trials (n = 28) included ambulatory functional endpoints, only 13.3% (n = 4) incorporated imaging biomarkers. Small molecule trials and natural history studies were the predominant contributors to imaging biomarker use, each comprising 30.4% of such studies. Small molecule trials used imaging biomarkers more frequently than biologic trials, likely because biologics often target dystrophin, an established surrogate biomarker, while small molecules lack regulatory-approved biomarkers. Notably, following the 2018 FDA guidance finalization, we observed a significant decrease in new trials using imaging biomarkers despite earlier regulatory encouragement. This analysis demonstrates that while imaging biomarkers are increasingly used in natural history studies, their integration into interventional trials remains limited. In the XGBoost machine learning analysis, trial duration and start year were the strongest predictors of biomarker usage, with a decline observed following the 2018 FDA guidance. Despite their potential to objectively track disease progression, imaging biomarkers have not yet been widely adopted as primary endpoints in therapeutic trials, likely due to regulatory and logistical challenges. Future research should examine whether standardizing imaging protocols or integrating hybrid endpoint models could bridge the regulatory gap currently limiting biomarker adoption in therapeutic trials.
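
For readers unfamiliar with the trend analysis, the predictor ranking reported above follows a standard gradient-boosted-trees workflow; a generic sketch with hypothetical, numeric column names:

```python
import pandas as pd
from xgboost import XGBClassifier

# Hypothetical study-level table; all column names are illustrative
# and assumed to be numeric or boolean.
df = pd.read_csv("dmd_trials.csv")
X = df[["start_year", "duration_years", "enrollment", "is_biologic"]]
y = df["uses_imaging_biomarker"]

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X, y)

# Gain-based importances rank which trial attributes best predict
# imaging-biomarker use (duration and start year, per the review).
for name, score in zip(X.columns, model.feature_importances_):
    print(f"{name}: {score:.3f}")
```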

Cardiac-CLIP: A Vision-Language Foundation Model for 3D Cardiac CT Images

Yutao Hu, Ying Zheng, Shumei Miao, Xiaolei Zhang, Jiahao Xia, Yaolei Qi, Yiyang Zhang, Yuting He, Qian Chen, Jing Ye, Hongyan Qiao, Xiuhua Hu, Lei Xu, Jiayin Zhang, Hui Liu, Minwen Zheng, Yining Wang, Daimin Zhang, Ji Zhang, Wenqi Shao, Yun Liu, Longjiang Zhang, Guanyu Yang

arXiv preprint · Jul 29, 2025
Foundation models have demonstrated remarkable potential in the medical domain. However, their application to complex cardiovascular diagnostics remains underexplored. In this paper, we present Cardiac-CLIP, a multi-modal foundation model designed for 3D cardiac CT images. Cardiac-CLIP is developed through a two-stage pre-training strategy. The first stage employs a 3D masked autoencoder (MAE) to perform self-supervised representation learning from large-scale unlabeled volumetric data, enabling the visual encoder to capture rich anatomical and contextual features. In the second stage, contrastive learning is introduced to align visual and textual representations, facilitating cross-modal understanding. To support the pre-training, we collect 16,641 real clinical CT scans, supplemented by 114k publicly available scans. Meanwhile, we standardize free-text radiology reports into unified templates and construct pathology vectors according to diagnostic attributes, from which a soft-label matrix is generated to supervise the contrastive learning process. Furthermore, to comprehensively evaluate the effectiveness of Cardiac-CLIP, we collect 6,722 real clinical cases from 12 independent institutions, along with open-source data, to construct the evaluation dataset. Specifically, Cardiac-CLIP is comprehensively evaluated across multiple tasks, including cardiovascular abnormality classification, information retrieval, and clinical analysis. Experimental results demonstrate that Cardiac-CLIP achieves state-of-the-art performance across various downstream tasks on both internal and external data. In particular, Cardiac-CLIP exhibits great effectiveness in supporting complex clinical tasks such as the prospective prediction of acute coronary syndrome, which is notoriously difficult in real-world scenarios.
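
The second-stage objective can be sketched as a CLIP-style contrastive loss whose targets are a soft-label matrix rather than the identity; deriving that matrix from pathology-vector similarity, as below, is an assumption about the paper's construction:

```python
import torch
import torch.nn.functional as F

def soft_label_clip_loss(img_emb, txt_emb, pathology_vecs, temp=0.07):
    """Contrastive loss with soft targets instead of one-hot matching.

    img_emb, txt_emb: (B, D) L2-normalized embeddings.
    pathology_vecs:   (B, K) diagnostic-attribute vectors; using their
                      similarity as the soft-label matrix is an
                      illustrative assumption.
    """
    logits = img_emb @ txt_emb.t() / temp              # (B, B)
    p = F.normalize(pathology_vecs.float(), dim=-1)
    targets = F.softmax(p @ p.t() / temp, dim=-1)      # soft-label matrix
    # The similarity matrix is symmetric, so the same row-normalized
    # targets serve both retrieval directions.
    loss_i = F.cross_entropy(logits, targets)          # image -> text
    loss_t = F.cross_entropy(logits.t(), targets)      # text -> image
    return 0.5 * (loss_i + loss_t)
```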

Time-series X-ray image prediction of dental skeleton treatment progress via neural networks.

Kwon SW, Moon JK, Song SC, Cha JY, Kim YW, Choi YJ, Lee JS

PubMed · Jul 29, 2025
Accurate prediction of skeletal changes during orthodontic treatment in growing patients remains challenging due to significant individual variability in craniofacial growth and treatment responses. Conventional methods, such as support vector regression and multilayer perceptrons, require multiple sequential radiographs to achieve acceptable accuracy. However, they are limited by increased radiation exposure, susceptibility to landmark identification errors, and the lack of visually interpretable predictions. To overcome these limitations, this study explored advanced generative approaches, including denoising diffusion probabilistic models (DDPMs), latent diffusion models (LDMs), and ControlNet, to predict future cephalometric radiographs using minimal input data. We evaluated three diffusion-based models (a DDPM utilizing three sequential cephalometric images, termed 3-input DDPM; a single-image DDPM, termed 1-input DDPM; and a single-image LDM) and a vision-based generative model, ControlNet, conditioned on patient-specific attributes such as age, sex, and orthodontic treatment type. Quantitative evaluations demonstrated that the 3-input DDPM achieved the highest numerical accuracy, whereas the single-image LDM delivered comparable predictive performance with significantly reduced clinical requirements. ControlNet also exhibited competitive accuracy, highlighting its potential effectiveness in clinical scenarios. These findings indicate that the single-image LDM and ControlNet offer practical solutions for personalized orthodontic treatment planning, reducing patient visits and radiation exposure while maintaining robust predictive accuracy.
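
All three diffusion variants share the same core training objective, predicting the noise added to the target radiograph given the prior image(s); a bare-bones sketch of one conditional DDPM training step (the model interface and conditioning-by-concatenation are assumptions):

```python
import torch
import torch.nn.functional as F

def ddpm_step(eps_model, x_target, x_prior, alphas_cumprod):
    """One conditional DDPM training step (illustrative).

    eps_model: network predicting noise from (noisy target + prior, t).
    x_target:  (B, 1, H, W) future cephalogram to learn to generate.
    x_prior:   (B, C, H, W) earlier radiograph(s) as conditioning.
    alphas_cumprod: (T,) cumulative products of the noise schedule.
    """
    B = x_target.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (B,), device=x_target.device)
    a_bar = alphas_cumprod[t].view(B, 1, 1, 1)
    noise = torch.randn_like(x_target)
    x_t = a_bar.sqrt() * x_target + (1 - a_bar).sqrt() * noise  # q(x_t|x_0)
    # Conditioning by channel concatenation is one common choice.
    eps_hat = eps_model(torch.cat([x_t, x_prior], dim=1), t)
    return F.mse_loss(eps_hat, noise)
```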

Deep sensorless tracking of ultrasound probe orientation during freehand transperineal biopsy with spatial context for symmetry disambiguation.

Soormally C, Beitone C, Troccaz J, Voros S

PubMed · Jul 29, 2025
Diagnosis of prostate cancer requires histopathology of tissue samples. Following an MRI to identify suspicious areas, a biopsy is performed under ultrasound (US) guidance. In existing assistance systems, 3D US information is generally available (taken before the biopsy session and/or in between samplings). However, without registration between 2D images and 3D volumes, the urologist must rely on cognitive navigation. This work introduces a deep learning model to track the orientation of real-time US slices relative to a reference 3D US volume using only image and volume data. The dataset comprises 515 3D US volumes collected from 51 patients during routine transperineal biopsy. To generate 2D image streams, volumes are resampled to simulate three degrees of freedom rotational movements around the rectal entrance. The proposed model comprises two ResNet-based sub-modules to address the symmetry ambiguity arising from complex out-of-plane movement of the probe. The first sub-module predicts the unsigned relative orientation between consecutive slices, while the second leverages a custom similarity model and a spatial context volume to determine the sign of this relative orientation. From the sub-modules' predictions, slice orientations along the navigated trajectory can then be derived in real-time. Results demonstrate that registration error remains below 2.5 mm in 92% of cases over a 5-second trajectory, and in 80% over a 25-second trajectory. These findings show that accurate, sensorless 2D/3D US registration given a spatial context is achievable with limited drift over extended navigation. This highlights the potential of AI-driven biopsy assistance to increase the accuracy of freehand biopsy.
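
The tracking logic amounts to composing per-frame relative rotations once the sign sub-module has disambiguated direction; a sketch using SciPy rotations (the two network interfaces and their numpy outputs are assumptions):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def track_orientation(frames, ref_rotation, magnitude_net, sign_net, context_vol):
    """Accumulate slice orientations from pairwise predictions.

    magnitude_net: predicts unsigned relative rotation angles (degrees,
                   numpy array) between consecutive frames -- assumed
                   interface for the first sub-module.
    sign_net:      returns +/-1 per axis using the 3D context volume --
                   assumed interface for the second sub-module.
    """
    current = ref_rotation          # scipy Rotation of the first slice
    poses = [current]
    for prev, curr in zip(frames, frames[1:]):
        angles = np.abs(magnitude_net(prev, curr))       # e.g., (rx, ry, rz)
        signs = sign_net(prev, curr, context_vol)        # each in {-1, +1}
        delta = Rotation.from_euler("xyz", signs * angles, degrees=True)
        current = current * delta   # compose; drift accumulates over time
        poses.append(current)
    return poses
```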
