Page 149 of 3993984 results

Physics consistent machine learning framework for inverse modeling with applications to ICF capsule implosions.

Serino DA, Bell E, Klasky M, Southworth BS, Nadiga B, Wilcox T, Korobkin O

PubMed · Jul 17 2025
In high energy density physics (HEDP) and inertial confinement fusion (ICF), predictive modeling is complicated by uncertainty in parameters that characterize various aspects of the modeled system, such as material properties, equation of state (EOS), opacities, and initial conditions. Typically, however, these parameters are not directly observable; what is observed instead is a time sequence of radiographic projections using X-rays. In this work, we define a set of sparse hydrodynamic features, derived from the outgoing shock profile and outer material edge and obtainable from radiographic measurements, to directly infer such parameters. Our machine learning (ML)-based methodology involves a pipeline of two architectures, a radiograph-to-features network (R2FNet) and a features-to-parameters network (F2PNet), that are trained independently and later combined to approximate a posterior distribution over the parameters given radiographs. We show that the machine learning architectures are able to accurately infer initial conditions and EOS parameters, and that the estimated parameters can be used in a hydrodynamics code to obtain density fields, shocks, and material interfaces that satisfy thermodynamic and hydrodynamic consistency. Finally, we demonstrate that features resulting from an unknown EOS model can be successfully mapped onto parameters of a chosen analytical EOS model, implying that the network predictions learn physics with a degree of invariance to the underlying choice of EOS model. To the best of our knowledge, our framework is the first demonstration of recovering thermodynamically and hydrodynamically consistent density fields from noisy radiographs.
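The two-stage pipeline described in the abstract can be sketched as follows. The dimensions, the linear stand-ins for R2FNet and F2PNet, and the noise-propagation scheme for approximating the posterior are all illustrative assumptions, not the paper's actual architectures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a flattened radiograph, a sparse hydrodynamic feature
# vector (shock-profile and outer-edge descriptors), and the physics
# parameters (EOS coefficients, initial conditions). Stand-in values only.
N_PIX, N_FEAT, N_PARAM = 256, 8, 4

# Stand-ins for the two trained networks: fixed random linear maps.
W_r2f = rng.normal(size=(N_FEAT, N_PIX)) / np.sqrt(N_PIX)
W_f2p = rng.normal(size=(N_PARAM, N_FEAT)) / np.sqrt(N_FEAT)

def r2fnet(radiograph):
    """Radiograph-to-features network (R2FNet stand-in)."""
    return W_r2f @ radiograph

def f2pnet(features):
    """Features-to-parameters network (F2PNet stand-in)."""
    return W_f2p @ features

def approximate_posterior(radiograph, n_samples=500, noise_sigma=0.05):
    """Propagate noisy replicas of one radiograph through both networks;
    the spread of the outputs gives a crude posterior over parameters."""
    noisy = radiograph + noise_sigma * rng.normal(size=(n_samples, N_PIX))
    samples = np.array([f2pnet(r2fnet(x)) for x in noisy])
    return samples.mean(axis=0), samples.std(axis=0)

mean, std = approximate_posterior(rng.normal(size=N_PIX))
```

The key design point the abstract emphasizes is that the two networks are trained independently and only composed at inference time, which the function composition above mirrors.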

An AI method to predict pregnancy loss by extracting biological indicators from embryo ultrasound recordings in early pregnancy.

Liu L, Zang Y, Zheng H, Li S, Song Y, Feng X, Zhang X, Li Y, Cao L, Zhou G, Dong T, Huang Q, Pan T, Deng J, Cheng D

PubMed · Jul 17 2025
B-ultrasound results are widely used in early pregnancy loss (EPL) prediction, but intra-observer and inter-observer errors are inevitable, especially in early pregnancy, leading to inconsistent assessment of embryonic status and thus affecting the judgment of EPL. A rapid and accurate model is therefore needed to predict pregnancy loss in the first trimester. This study aimed to construct an artificial intelligence model that automatically extracts biometric parameters from ultrasound videos of early embryos and predicts pregnancy loss. This can effectively eliminate the measurement error of B-ultrasound results, accurately predict EPL, and provide decision support for doctors with relatively little clinical experience. A total of 630 ultrasound videos from women with early singleton pregnancies of gestational age between 6 and 10 weeks were used for training. A two-stage artificial intelligence model was established. First, biometric parameters such as gestational sac area (GSA), yolk sac diameter (YSD), crown-rump length (CRL), and fetal heart rate (FHR) were extracted from ultrasound videos by A3F-net, a deep neural network of our own design based on U-Net. Then, an ensemble learning model predicted pregnancy loss risk based on these features. Dice, IoU, and Precision were used to evaluate the measurement results, and sensitivity, AUC, etc. were used to evaluate the prediction results. The fetal heart rates were compared with those measured by doctors, and the accuracy of the results was compared with other AI models. In the biometric feature measurement stage, the precision of GSA, YSD, and CRL with A3F-net was 98.64%, 96.94%, and 92.83%, respectively, the highest among the models compared. Bland-Altman analysis did not show systematic deviations between doctors and the AI model. The mean and standard deviation of the relative error between doctors and the AI model were 0.060 ± 0.057.
In the EPL prediction stage, the ensemble learning models demonstrated excellent performance, with CatBoost being the best-performing model, achieving a precision of 98.0% and an AUC of 0.969 (95% CI: 0.962-0.975). In this study, a hybrid AI model to predict EPL was established. First, a deep neural network automatically measured the biometric parameters from ultrasound video to ensure the consistency and accuracy of the measurements; then a machine learning model predicted EPL risk to support doctors' decisions. The established AI model has the potential to assist physicians in making more accurate and timely clinical decisions in EPL prediction.
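The second-stage risk prediction can be illustrated with a toy soft-voting ensemble over the measured features. The thresholds and weights below are invented for illustration only; the study trains learned ensemble models (with CatBoost performing best), not hand-set rules:

```python
import numpy as np

def weak_scores(fhr, crl_mm, gsa_mm2, ysd_mm):
    """Each rule emits a risk score in [0, 1]. Slow heart rate, a small sac,
    and an enlarged yolk sac are classic warning signs; the cutoffs here
    are illustrative, not the paper's learned values."""
    return np.array([
        1.0 if fhr < 110 else 0.0,      # bradycardia for 6-10 weeks
        1.0 if crl_mm < 5 else 0.0,     # small crown-rump length
        1.0 if gsa_mm2 < 300 else 0.0,  # small gestational sac
        1.0 if ysd_mm > 6 else 0.0,     # enlarged yolk sac
    ])

def ensemble_risk(fhr, crl_mm, gsa_mm2, ysd_mm, weights=None):
    """Soft vote: weighted mean of the rule scores, in [0, 1]."""
    scores = weak_scores(fhr, crl_mm, gsa_mm2, ysd_mm)
    w = np.ones_like(scores) if weights is None else np.asarray(weights)
    return float(np.dot(w, scores) / w.sum())

print(ensemble_risk(fhr=95, crl_mm=4, gsa_mm2=250, ysd_mm=7))    # all rules fire: 1.0
print(ensemble_risk(fhr=160, crl_mm=12, gsa_mm2=600, ysd_mm=4))  # none fire: 0.0
```

The point of the two-stage design is visible even in this sketch: once stage one yields consistent numeric features, stage two reduces to ordinary tabular classification.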

2D-3D deformable image registration of histology slide and micro-CT with DISA-based initialization.

Chen J, Ronchetti M, Stehl V, Nguyen V, Kallaa MA, Gedara MT, Lölkes C, Moser S, Seidl M, Wieczorek M

PubMed · Jul 17 2025
Recent developments in the registration of histology and micro-computed tomography (µCT) have broadened the scope of pathological applications such as virtual histology based on µCT. The topic remains challenging because of the low image quality of soft-tissue CT. Additionally, soft-tissue samples usually deform during histology slide preparation, making it difficult to correlate structures between the histology slide and the µCT. In this work, we propose a novel 2D-3D multi-modal deformable image registration method. The method performs an initial global 2D-3D registration using an ML-based differentiable similarity measure, and the registration is then finalized by an analytical out-of-plane deformation refinement. The method is evaluated on datasets acquired from tonsil and tumor tissues, investigating µCT of both phase-contrast and conventional absorption modalities. The registration results from the proposed method are compared with those from intensity- and keypoint-based methods, using both visual and fiducial-based evaluations. The proposed method demonstrates superior performance compared to the other two methods.
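For contrast with the learned (DISA-based) differentiable similarity measure the paper uses, a conventional intensity-based similarity plus a brute-force global 2D-3D slice search might look like this minimal sketch; the dimensions and data are synthetic:

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Normalized cross-correlation between two images of equal shape:
    the classical intensity-based similarity that methods like the paper's
    learned measure are typically compared against."""
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return float((a * b).mean())

def best_slice(hist_2d, ct_volume):
    """Toy global 2D-3D search: score every axial µCT slice against the
    histology image and return the best-matching slice index."""
    scores = [ncc(hist_2d, ct_volume[k]) for k in range(ct_volume.shape[0])]
    return int(np.argmax(scores)), max(scores)

rng = np.random.default_rng(0)
volume = rng.normal(size=(8, 32, 32))   # synthetic µCT stack
histology = volume[3].copy()            # embed the "histology" at slice 3
k, score = best_slice(histology, volume)
print(k)  # finds the embedded slice, index 3
```

A real 2D-3D registration additionally searches over rotations and out-of-plane tilt, which is exactly the part the paper's deformable refinement addresses; this sketch only shows the similarity-driven search idea.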

Opportunistic computed tomography (CT) assessment of osteoporosis in patients undergoing transcatheter aortic valve replacement (TAVR).

Paukovitsch M, Fechner T, Felbel D, Moerike J, Rottbauer W, Klömpken S, Brunner H, Kloth C, Beer M, Sekuboyina A, Buckert D, Kirschke JS, Sollmann N

PubMed · Jul 17 2025
CT-based opportunistic screening using artificial intelligence finds a high prevalence (43%) of osteoporosis in CT scans obtained for planning of transcatheter aortic valve replacement. Thus, opportunistic screening may be a cost-effective way to assess osteoporosis in high-risk populations. Osteoporosis is an underdiagnosed condition associated with fractures and frailty, but may be detected in routine computed tomography (CT) scans. Volumetric bone mineral density (vBMD) was measured in clinical routine thoraco-abdominal CT scans of 207 patients for planning of transcatheter aortic valve replacement (TAVR) using an artificial intelligence (AI)-based algorithm. 43% of patients had osteoporosis (vBMD < 80 mg/cm³ at L1-L3); these patients were older (83.0 [interquartile range, IQR: 78.0-85.5] vs. 79.0 [IQR: 71.8-84.0] years, p < 0.001), more often female (55.1 vs. 28.8%, p < 0.001), and had a higher Society of Thoracic Surgeons score for mortality (3.0 [IQR: 1.8-4.6] vs. 2.1 [IQR: 1.4-3.2]%, p < 0.001). In addition to lumbar vBMD (58.2 ± 14.7 vs. 106 ± 21.4 mg/cm³, p < 0.001), thoracic vBMD (79.5 ± 17.9 vs. 127.4 ± 26.0 mg/cm³, p < 0.001) was also significantly reduced in these patients and showed high diagnostic accuracy for osteoporosis assessment (area under the curve: 0.96, p < 0.001). Osteoporotic patients were significantly more often at risk for falls (40.4 vs. 22.9%, p = 0.007) and more frequently required help with activities of daily living (ADL) (48.3 vs. 33.1%, p = 0.026), while direct-to-home discharges were fewer (88.8 vs. 96.6%, p = 0.026). In-hospital bleeding complications (3.4 vs. 5.1%), stroke (1.1 vs. 2.5%), and death (1.1 vs. 0.8%) were equally low, while in-hospital device success was equally high (94.4 vs. 94.9%, p > 0.05 for all comparisons). However, the one-year probability of survival was significantly lower (84.0 vs. 98.2%, log-rank p < 0.01).
Applying an AI-based algorithm to TAVR planning CT scans revealed osteoporosis in 43% of patients. Osteoporosis may represent a marker of frailty and worsened outcome in TAVR patients.
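The study's cutoff-based labeling (lumbar vBMD < 80 mg/cm³) and the AUC evaluation of thoracic vBMD can be reproduced in miniature. The vBMD values below are made up, and the AUC is computed directly from the Mann-Whitney statistic rather than any particular software used in the paper:

```python
import numpy as np

OSTEOPOROSIS_VBMD = 80.0  # mg/cm^3, the study's lumbar (L1-L3) cutoff

def classify_osteoporosis(lumbar_vbmd):
    """Label patients by the study's lumbar vBMD threshold."""
    return np.asarray(lumbar_vbmd) < OSTEOPOROSIS_VBMD

def auc_mann_whitney(scores, labels):
    """AUC as the Mann-Whitney statistic: the probability that a randomly
    chosen positive case outranks a randomly chosen negative (ties = 0.5)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

lumbar = np.array([58.0, 62.0, 105.0, 110.0, 95.0])    # invented values
thoracic = np.array([79.0, 82.0, 128.0, 131.0, 120.0])
labels = classify_osteoporosis(lumbar)
auc = auc_mann_whitney(-thoracic, labels)  # lower thoracic vBMD => osteoporosis
print(auc)  # 1.0 on this perfectly separated toy data
```

Negating the thoracic vBMD orients the score so that higher values indicate disease, which is what the AUC convention expects; on real data the separation is imperfect, hence the reported 0.96.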

The application of super-resolution ultrasound radiomics models in predicting the failure of conservative treatment for ectopic pregnancy.

Zhang M, Sheng J

PubMed · Jul 17 2025
Conservative treatment remains a viable option for selected patients with ectopic pregnancy (EP), but failure may lead to rupture and serious complications. Currently, serum β-hCG is the main predictor of treatment outcomes, yet its accuracy is limited. This study aimed to develop and validate a predictive model that integrates radiomic features derived from super-resolution (SR) ultrasound images with clinical biomarkers to improve risk stratification. A total of 228 patients with EP receiving conservative treatment were retrospectively included, with 169 classified as treatment success and 59 as failure. SR images were generated using a deep learning-based generative adversarial network (GAN). Radiomic features were extracted from both normal-resolution (NR) and SR ultrasound images. Features with an intraclass correlation coefficient (ICC) ≥ 0.75 were retained after intra- and inter-observer evaluation. Feature selection involved statistical testing and Least Absolute Shrinkage and Selection Operator (LASSO) regression. Random forest algorithms were used to construct the NR and SR models. A clinical model based on serum β-hCG was also developed, and the Clin-SR model was constructed by fusing SR radiomics with β-hCG values. Model performance was evaluated using area under the curve (AUC), calibration, and decision curve analysis (DCA). An independent temporal validation cohort (n = 40; 20 failures, 20 successes) was used to validate the nomogram derived from the Clin-SR model. The SR model significantly outperformed the NR model in the test cohort (AUC: 0.791 ± 0.015 vs. 0.629 ± 0.083). In a representative iteration, the Clin-SR fusion model achieved an AUC of 0.870 ± 0.015, with good calibration and net clinical benefit, suggesting reliable performance in predicting conservative treatment failure. In the independent validation cohort, the nomogram demonstrated good generalizability, with an AUC of 0.808 and consistent calibration across risk thresholds.
Key contributing radiomic features included Gray Level Variance and Voxel Volume, reflecting lesion heterogeneity and size. The Clin-SR model, which integrates deep learning-enhanced SR ultrasound radiomics with serum β-hCG, offers a robust and non-invasive tool for predicting conservative treatment failure in ectopic pregnancy. This multimodal approach enhances early risk stratification and supports personalized clinical decision-making, potentially reducing overtreatment and emergency interventions.
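The LASSO selection step in the pipeline can be sketched with a minimal coordinate-descent implementation. The data, penalty, and iteration counts below are illustrative; the preceding ICC ≥ 0.75 reliability filter and the downstream random forest are omitted:

```python
import numpy as np

def soft_threshold(rho, lam):
    """The LASSO shrinkage operator: pulls small coefficients to exactly 0."""
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Minimal LASSO via cyclic coordinate descent on standardized features;
    with unit-variance columns each coordinate update is a closed form."""
    X = (X - X.mean(0)) / X.std(0)
    y = y - y.mean()
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding feature j's current contribution
            resid = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ resid / n
            beta[j] = soft_threshold(rho, lam)
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))              # 5 candidate radiomic features
y = 2 * X[:, 0] + 0.1 * rng.normal(size=200)  # only feature 0 is informative
beta = lasso_cd(X, y, lam=0.5)
print(beta.round(2))  # only the first coefficient survives the penalty
```

This zeroing-out behavior is why LASSO is used for radiomics pipelines with many correlated candidate features: the retained non-zero coefficients define the selected feature set.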

Characterizing structure-function coupling in subjective memory complaints of preclinical Alzheimer's disease.

Wei C, Wang J, Xue Y, Jiang J, Cao M, Li S, Chen X

PubMed · Jul 17 2025
Background: Subjective cognitive decline (SCD) is recognized as an early phase in the progression of Alzheimer's disease (AD). Objective: To explore the abnormal patterns of morphological and functional connectivity coupling (MC-FC coupling) and their potential diagnostic significance in SCD. Methods: Data from 52 individuals with SCD and 51 age-, gender-, and education-matched healthy controls (HC) who underwent resting-state functional magnetic resonance imaging and high-resolution 3D T₁-weighted imaging were retrieved to build the MC and FC of gray matter. Support vector machine (SVM) methods were used to differentiate between SCD and HC. Results: SCD individuals exhibited MC-FC decoupling in the frontoparietal network compared with HC (p = 0.002, 5000 permutations). Using these adjusted MC-FC coupling metrics, SVM analysis achieved 74.76% accuracy, 64.71% sensitivity, and 92.31% specificity (p < 0.001, 5000 permutations). Additionally, stronger MC-FC coupling of the left inferior temporal gyrus (r = 0.294, p = 0.034) and right posterior cingulate gyrus (r = 0.372, p = 0.007) in SCD individuals was positively correlated with subjective memory complaint performance. Conclusions: These findings provide insight into the idiosyncratic features of brain organization underlying SCD from the perspective of MC-FC coupling and highlight the potential of MC-FC coupling for identifying the preclinical stage of AD.
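The label-permutation scheme behind p-values like "p < 0.001, 5000 permutations" can be sketched as follows. A nearest-centroid classifier stands in for the study's SVM, and the two-group synthetic data is invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_acc(X, y):
    """Simple stand-in for the study's SVM: assign each subject to the
    closer class centroid and report accuracy."""
    c0, c1 = X[y == 0].mean(0), X[y == 1].mean(0)
    pred = (np.linalg.norm(X - c1, axis=1) <
            np.linalg.norm(X - c0, axis=1)).astype(int)
    return (pred == y).mean()

def permutation_p_value(X, y, n_perm=500):
    """Non-parametric significance of the observed accuracy: shuffle the
    labels many times to build a null distribution, then count how often
    the null matches or beats the observed value (with the +1 correction)."""
    observed = nearest_centroid_acc(X, y)
    null = [nearest_centroid_acc(X, rng.permutation(y)) for _ in range(n_perm)]
    p = (1 + sum(a >= observed for a in null)) / (1 + n_perm)
    return observed, p

# Two well-separated synthetic groups of 20 subjects, 2 features each.
X = np.vstack([rng.normal(0.0, 1.0, size=(20, 2)),
               rng.normal(4.0, 1.0, size=(20, 2))])
y = np.array([0] * 20 + [1] * 20)
acc, p = permutation_p_value(X, y)
print(round(acc, 2), p < 0.01)
```

The permutation count (500 here, 5000 in the study) bounds how small the p-value can be: with n permutations the minimum reportable p is 1/(n+1).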

Insights into a radiology-specialised multimodal large language model with sparse autoencoders

Kenza Bouzid, Shruthi Bannur, Daniel Coelho de Castro, Anton Schwaighofer, Javier Alvarez-Valle, Stephanie L. Hyland

arXiv preprint · Jul 17 2025
Interpretability can improve the safety, transparency and trust of AI models, which is especially important in healthcare applications where decisions often carry significant consequences. Mechanistic interpretability, particularly through the use of sparse autoencoders (SAEs), offers a promising approach for uncovering human-interpretable features within large transformer-based models. In this study, we apply Matryoshka-SAE to the radiology-specialised multimodal large language model, MAIRA-2, to interpret its internal representations. Using large-scale automated interpretability of the SAE features, we identify a range of clinically relevant concepts - including medical devices (e.g., line and tube placements, pacemaker presence), pathologies such as pleural effusion and cardiomegaly, longitudinal changes and textual features. We further examine the influence of these features on model behaviour through steering, demonstrating directional control over generations with mixed success. Our results reveal practical and methodological challenges, yet they offer initial insights into the internal concepts learned by MAIRA-2 - marking a step toward deeper mechanistic understanding and interpretability of a radiology-adapted multimodal large language model, and paving the way for improved model transparency. We release the trained SAEs and interpretations: https://huggingface.co/microsoft/maira-2-sae.
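A sparse autoencoder of the kind described can be sketched in a few lines. The sizes are arbitrary, the weights are untrained, and the Matryoshka nesting (training nested prefixes of the dictionary) is noted but not implemented:

```python
import numpy as np

rng = np.random.default_rng(0)

D_MODEL, D_DICT, K = 64, 512, 8  # illustrative sizes, not MAIRA-2's

# Untrained encoder/decoder dictionaries, for shape and mechanics only.
W_enc = rng.normal(size=(D_DICT, D_MODEL)) / np.sqrt(D_MODEL)
W_dec = rng.normal(size=(D_MODEL, D_DICT)) / np.sqrt(D_DICT)

def topk_sae(activation, k=K):
    """Encode a residual-stream activation into an overcomplete dictionary,
    keep only the k strongest feature activations (hard sparsity), then
    decode. A Matryoshka-SAE additionally trains nested prefixes of the
    dictionary so smaller sub-dictionaries remain usable; that nesting is
    omitted in this sketch."""
    z = np.maximum(W_enc @ activation, 0.0)  # ReLU feature activations
    z[np.argsort(z)[:-k]] = 0.0              # zero all but the top-k
    return z, W_dec @ z                      # sparse code, reconstruction

z, recon = topk_sae(rng.normal(size=D_MODEL))
print(int((z > 0).sum()))  # at most K active features
```

The sparse code `z` is what makes per-feature interpretation (and steering, as in the paper) possible: each of the few active dictionary entries can be inspected or amplified individually.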

From Variability To Accuracy: Conditional Bernoulli Diffusion Models with Consensus-Driven Correction for Thin Structure Segmentation

Jinseo An, Min Jin Lee, Kyu Won Shim, Helen Hong

arXiv preprint · Jul 17 2025
Accurate segmentation of orbital bones in facial computed tomography (CT) images is essential for creating customized implants to reconstruct orbital bone defects, and is particularly challenging due to ambiguous boundaries and thin structures such as the orbital medial wall and orbital floor. In these ambiguous regions, existing segmentation approaches often produce disconnected or under-segmented results. We propose a novel framework that corrects segmentation results by leveraging consensus from multiple diffusion model outputs. Our approach employs a conditional Bernoulli diffusion model, trained on diverse annotation patterns per image, to generate multiple plausible segmentations, followed by a consensus-driven correction that incorporates position proximity, consensus level, and gradient-direction similarity to correct challenging regions. Experimental results demonstrate that our method outperforms existing methods, significantly improving recall in ambiguous regions while preserving the continuity of thin structures. Furthermore, our method automates the manual correction of segmentation results and can be applied to image-guided surgical planning and surgery.
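The consensus idea can be illustrated with a pixel-wise voting sketch. The agreement thresholds are arbitrary, and the paper's additional cues (position proximity, gradient-direction similarity) are omitted:

```python
import numpy as np

def consensus_correct(segmentations, base, high=0.8, low=0.2):
    """Consensus-driven correction sketch: stack binary masks sampled from
    the diffusion model, then add pixels the ensemble strongly agrees on
    and drop pixels it strongly rejects, leaving contested pixels as the
    base prediction had them."""
    votes = np.mean(np.stack(segmentations), axis=0)  # per-pixel agreement
    out = base.copy()
    out[votes >= high] = 1  # strong consensus: restore missed thin structure
    out[votes <= low] = 0   # strong rejection: remove spurious pixels
    return out

# Three sampled masks over a 3-pixel toy "image", plus a base prediction
# that misses pixel 0 and hallucinates pixel 2.
masks = [np.array([1, 1, 0]), np.array([1, 1, 0]), np.array([1, 0, 0])]
base = np.array([0, 1, 1])
print(consensus_correct(masks, base))  # [1 1 0]: pixel 0 restored, pixel 2 dropped
```

This is why sampling multiple plausible segmentations helps thin structures: a pixel a single model drops by chance is unlikely to be dropped by every sample, so the vote recovers it.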

Deep Learning-Based Fetal Lung Segmentation from Diffusion-weighted MRI Images and Lung Maturity Evaluation for Fetal Growth Restriction

Zhennan Xiao, Katharine Brudkiewicz, Zhen Yuan, Rosalind Aughwane, Magdalena Sokolska, Joanna Chappell, Trevor Gaunt, Anna L. David, Andrew P. King, Andrew Melbourne

arXiv preprint · Jul 17 2025
Fetal lung maturity is a critical indicator for predicting neonatal outcomes and the need for postnatal intervention, especially in pregnancies affected by fetal growth restriction. Intra-voxel incoherent motion (IVIM) analysis has shown promising results for non-invasive assessment of fetal lung development, but its reliance on manual segmentation is time-consuming, limiting its clinical applicability. In this work, we present an automated lung maturity evaluation pipeline for diffusion-weighted magnetic resonance images that consists of a deep learning-based fetal lung segmentation model and a model-fitting lung maturity assessment. A 3D nnU-Net model was trained on manually segmented images selected from the baseline frames of 4D diffusion-weighted MRI scans. The segmentation model demonstrated robust performance, yielding a mean Dice coefficient of 82.14%. Next, voxel-wise model fitting was performed based on both the nnU-Net-predicted and manual lung segmentations to quantify IVIM parameters reflecting tissue microstructure and perfusion, and the results suggested no differences between the two. Our work shows that a fully automated pipeline is feasible for supporting fetal lung maturity assessment and clinical decision-making.
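The voxel-wise model fitting can be illustrated with the standard bi-exponential IVIM signal equation and a crude grid search. The b-values, parameter grids, and use of noiseless data are simplifying assumptions; the paper's actual fitting procedure is not specified in the abstract:

```python
import numpy as np

def ivim_signal(b, f, d_star, d, s0=1.0):
    """Standard bi-exponential IVIM model: perfusion fraction f with
    pseudo-diffusion coefficient d_star, tissue diffusion coefficient d
    (coefficients in mm^2/s, b-values in s/mm^2)."""
    return s0 * (f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d))

def fit_ivim_grid(b, signal):
    """Per-voxel least-squares fit by exhaustive grid search, a crude
    stand-in for a real fitting routine; grids are illustrative."""
    fs = np.linspace(0.05, 0.5, 10)
    dstars = np.linspace(0.01, 0.1, 10)
    ds = np.linspace(0.0005, 0.003, 6)
    best, best_err = None, np.inf
    for f in fs:
        for d_star in dstars:
            for d in ds:
                err = np.sum((ivim_signal(b, f, d_star, d) - signal) ** 2)
                if err < best_err:
                    best, best_err = (f, d_star, d), err
    return best

b = np.array([0.0, 50.0, 100.0, 200.0, 400.0, 600.0, 800.0])
signal = ivim_signal(b, f=0.2, d_star=0.05, d=0.001)  # noiseless synthetic voxel
f_hat, dstar_hat, d_hat = fit_ivim_grid(b, signal)
print(round(f_hat, 3), round(dstar_hat, 3), round(d_hat, 4))  # recovers the true values
```

In the paper this fit is run for every voxel inside the (automatic or manual) lung mask, which is why segmentation quality directly determines the reliability of the derived maturity markers.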

Pixel Perfect MegaMed: A Megapixel-Scale Vision-Language Foundation Model for Generating High Resolution Medical Images

Zahra TehraniNasab, Amar Kumar, Tal Arbel

arXiv preprint · Jul 17 2025
Medical image synthesis presents unique challenges due to the inherent complexity and high-resolution detail required in clinical contexts. Traditional generative architectures such as Generative Adversarial Networks (GANs) or Variational Auto-Encoders (VAEs) have shown great promise for high-resolution image generation but struggle to preserve the fine-grained details that are key for accurate diagnosis. To address this issue, we introduce Pixel Perfect MegaMed, the first vision-language foundation model to synthesize images at a resolution of 1024x1024. Our method deploys a multi-scale transformer architecture designed specifically for ultra-high-resolution medical image generation, enabling the preservation of both global anatomical context and local image-level details. By leveraging vision-language alignment techniques tailored to medical terminology and imaging modalities, Pixel Perfect MegaMed bridges the gap between textual descriptions and visual representations at unprecedented resolution levels. We apply our model to the CheXpert dataset and demonstrate its ability to generate clinically faithful chest X-rays from text prompts. Beyond visual quality, these high-resolution synthetic images prove valuable for downstream tasks such as classification, showing measurable performance gains when used for data augmentation, particularly in low-data regimes. Our code is accessible through the project website - https://tehraninasab.github.io/pixelperfect-megamed.