Page 67 of 3463455 results

Towards Diagnostic Quality Flat-Panel Detector CT Imaging Using Diffusion Models

Hélène Corbaz, Anh Nguyen, Victor Schulze-Zachau, Paul Friedrich, Alicia Durrer, Florentin Bieder, Philippe C. Cattin, Marios N Psychogios

arXiv preprint · Aug 22 2025
Patients undergoing a mechanical thrombectomy procedure usually have a multi-detector CT (MDCT) scan before and after the intervention. The image quality of the flat-panel detector CT (FDCT) present in the intervention room is generally much lower than that of an MDCT due to significant artifacts. However, using only FDCT images could improve patient management, as the patient would not need to be moved to the MDCT room. Several studies have evaluated the potential use of FDCT imaging alone and the time that could be saved by acquiring the images before and/or after the intervention with the FDCT only. This study proposes using a denoising diffusion probabilistic model (DDPM) to improve the image quality of FDCT scans, making them comparable to MDCT scans. Clinicians evaluated FDCT, MDCT, and our model's predictions for diagnostic purposes using a questionnaire. The DDPM eliminated most artifacts and improved anatomical visibility without reducing bleeding detection, provided that the input FDCT image quality is not too low. Our code can be found on GitHub.
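The closed-form forward (noising) step that a DDPM is trained to invert can be sketched in a few lines; the linear beta schedule and array shapes below are common defaults assumed for illustration, not the authors' settings:

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

# Linear beta schedule over 1000 steps (a common default, assumed here)
betas = np.linspace(1e-4, 0.02, 1000)
rng = np.random.default_rng(0)
x0 = rng.standard_normal((64, 64))   # stand-in for an FDCT slice
xT = forward_diffuse(x0, 999, betas, rng)  # near-pure Gaussian noise
```

At the final step, alpha_bar is nearly zero, so x_T is approximately standard normal noise; the reverse (denoising) network is what maps such noise back toward an MDCT-quality image.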

Automatic analysis of negation cues and scopes for medical texts in French using language models.

Sadoune S, Richard A, Talbot F, Guyet T, Boussel L, Berry H

PubMed · Aug 22 2025
Correct automatic analysis of a medical report requires the identification of negations and their scopes. Since most available training data comes from medical texts in English, additional work is usually needed to apply these methods to non-English languages. Here, we introduce a supervised learning method for automatically identifying negation cues and determining their scopes in French medical reports using language models based on BERT. Using a new private corpus of French-language chest CT scan reports with consistent annotation, we first fine-tuned five available transformer models on the negation cue and scope identification task. Subsequently, we extended the methodology by adapting the optimal model to a wider range of clinical notes and reports (not limited to radiology reports) and more heterogeneous annotations. Lastly, we tested the generated model on its initial mask-filling task to ensure there was no catastrophic forgetting. On a corpus of thoracic CT scan reports annotated by four annotators within our team, our method reaches an F1-score of 99.4% for cue detection and 94.5% for scope detection, thus equaling or improving on state-of-the-art performance. On more generic biomedical reports, annotated with more heterogeneous rules, the quality of the automatic analysis naturally decreases, but our best-in-class model still delivers very good performance, with F1-scores of 98.2% (cue detection) and 90.9% (scope detection). Moreover, we show that fine-tuning the original model for the negation identification task preserves or even improves its performance on its initial mask-filling task, depending on the lemmatization. Considering the performance of our fine-tuned model for the detection of negation cues and scopes in medical reports in French, and its robustness with respect to the diversity of annotation rules and types of biomedical data, we conclude that it is suited for use in a real-life clinical context.
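Cue and scope detection of this kind is typically scored as exact span matches over BIO-style token tags; a minimal, dependency-free sketch of that F1 computation (the tag sequences are invented, not the paper's data):

```python
def extract_spans(tags):
    """Turn a BIO tag sequence into a set of (start, end_exclusive) spans."""
    spans, start = set(), None
    for i, tag in enumerate(tags):
        if tag == "B":                 # a new span begins
            if start is not None:
                spans.add((start, i))
            start = i
        elif tag == "O":               # outside: close any open span
            if start is not None:
                spans.add((start, i))
                start = None
        # "I" continues the current span
    if start is not None:
        spans.add((start, len(tags)))
    return spans

def span_f1(gold, pred):
    """Exact-match span F1, the usual metric for cue/scope detection."""
    g, p = extract_spans(gold), extract_spans(pred)
    tp = len(g & p)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(p), tp / len(g)
    return 2 * precision * recall / (precision + recall)

gold = ["O", "B", "I", "O", "B", "O"]
pred = ["O", "B", "I", "O", "O", "O"]
print(span_f1(gold, pred))  # one of two gold spans matched exactly
```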

Diagnostic performance of T1-Weighted MRI gray matter biomarkers in Parkinson's disease: A systematic review and meta-analysis.

Torres-Parga A, Gershanik O, Cardona S, Guerrero J, Gonzalez-Ojeda LM, Cardona JF

PubMed · Aug 22 2025
T1-weighted structural MRI has advanced our understanding of Parkinson's disease (PD), yet its diagnostic utility in clinical settings remains unclear. We aimed to assess the diagnostic performance of T1-weighted MRI gray matter (GM) metrics in distinguishing PD patients from healthy controls and to identify limitations affecting clinical applicability. A systematic review and meta-analysis were conducted on studies reporting sensitivity, specificity, or AUC for PD classification using T1-weighted MRI. Of 2906 screened records, 26 met inclusion criteria, and 10 provided sufficient data for quantitative synthesis. Risk of bias and heterogeneity were evaluated, and sensitivity analyses were performed by excluding influential studies. Pooled estimates showed a sensitivity of 0.71 (95% CI: 0.70-0.72), specificity of 0.889 (95% CI: 0.86-0.92), and overall accuracy of 0.909 (95% CI: 0.89-0.93). These metrics improved after excluding outliers, which reduced heterogeneity (I<sup>2</sup> from 95.7% to 0%). Frequently reported regions showing structural alterations included the substantia nigra, striatum, thalamus, medial temporal cortex, and middle frontal gyrus. However, region-specific diagnostic metrics could not be consistently synthesized due to methodological variability. Machine learning approaches, particularly support vector machines and neural networks, showed enhanced performance with appropriate validation. T1-weighted MRI gray matter metrics demonstrate moderate accuracy in differentiating PD from controls but are not yet suitable as standalone diagnostic tools. Greater methodological standardization, external validation, and integration with clinical and biological data are needed to support precision neurology and clinical translation.
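Pooled estimates of this kind are commonly obtained by inverse-variance weighting on the logit scale; a minimal fixed-effect sketch with hypothetical per-study sensitivities and sample sizes (not the review's data):

```python
import math

def pooled_logit(props, ns):
    """Fixed-effect pooling of proportions on the logit scale,
    weighting each study by the inverse variance of its logit."""
    num = den = 0.0
    for p, n in zip(props, ns):
        logit = math.log(p / (1 - p))
        var = 1.0 / (n * p * (1 - p))   # delta-method variance of the logit
        w = 1.0 / var                    # inverse-variance weight
        num += w * logit
        den += w
    pooled = num / den
    return 1.0 / (1.0 + math.exp(-pooled))  # back-transform to a proportion

# Hypothetical per-study sensitivities and sample sizes (illustration only)
sens = [0.68, 0.73, 0.70]
n = [120, 200, 90]
print(round(pooled_logit(sens, n), 3))
```

Random-effects models add a between-study variance term to each weight, which is how heterogeneity (I<sup>2</sup>) enters the pooled estimate.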

Linking morphometric variations in human cranial bone to mechanical behavior using machine learning.

Guo W, Bhagavathula KB, Adanty K, Rabey KN, Ouellet S, Romanyk DL, Westover L, Hogan JD

PubMed · Aug 22 2025
With the development of increasingly detailed imaging techniques, there is a need to update the methodology and evaluation criteria for bone analysis to understand the influence of bone microarchitecture on mechanical response. The present study aims to develop a machine learning-based approach to investigate the link between the morphology of the human calvarium and its mechanical response under quasi-static uniaxial compression. Micro-computed tomography is used to capture the microstructure, at a resolution of 18 μm, of male (n=5) and female (n=5) formalin-fixed calvarium specimens from the frontal and parietal regions. Image processing-based machine learning methods using convolutional neural networks are developed to isolate and calculate specific morphometric properties, such as porosity, trabecular thickness, and trabecular spacing. An ensemble method using gradient-boosted decision trees (XGBoost) then predicts mechanical strength from the morphological results; mean and minimum porosity of the diploë are found to be the most relevant factors for the mechanical strength of cranial bone under the studied conditions. Overall, this study provides new tools that can predict the mechanical response of the human calvarium a priori. In addition, the quantitative morphology of the human calvarium can be used as input data in finite element models, as well as contributing to efforts to develop cranial simulant materials.
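Porosity, the morphometric the study found most predictive, is simply the void fraction of a segmented binary volume; a toy sketch on synthetic data (not the study's images or pipeline):

```python
import numpy as np

def porosity(bone_mask):
    """Porosity = void fraction of a segmented (binary) bone volume,
    where True voxels are bone and False voxels are pore space."""
    return 1.0 - bone_mask.mean()

rng = np.random.default_rng(42)
# Toy stand-in for a segmented diploe region: roughly 60% bone voxels
vol = rng.random((32, 32, 32)) < 0.6
print(round(porosity(vol), 3))
```

In the paper's setup, per-specimen statistics of such maps (mean, minimum) would then feed the gradient-boosted regressor as tabular features.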

Performance of chest X-ray with computer-aided detection powered by deep learning-based artificial intelligence for tuberculosis presumptive identification during case finding in the Philippines.

Marquez N, Carpio EJ, Santiago MR, Calderon J, Orillaza-Chi R, Salanap SS, Stevens L

PubMed · Aug 22 2025
The Philippines' high tuberculosis (TB) burden calls for effective point-of-care screening. Systematic TB case finding using chest X-ray (CXR) with computer-aided detection powered by deep learning-based artificial intelligence (AI-CAD) provides this opportunity. We aimed to comprehensively review AI-CAD's real-life performance in the local context to support refining its integration into the country's programmatic TB elimination efforts. A retrospective cross-sectional analysis was done on case-finding activities conducted in four regions of the Philippines between May 2021 and March 2024. Individuals 15 years and older with complete CXR and molecular World Health Organization-recommended rapid diagnostic (mWRD) test results were included. Presumptive TB was identified either by CXR, by TB signs and symptoms, and/or by official radiologist readings. The overall diagnostic accuracy of CXR with AI-CAD, stratified by different factors, was assessed using a fixed abnormality threshold and mWRD as the reference standard. Given the imbalanced dataset, we evaluated both precision-recall (PRC) and receiver operating characteristic (ROC) plots. Due to limited verification of CAD-negative individuals, we used "pseudo-sensitivity" and "pseudo-specificity" to reflect estimates based on partial testing. We identified potential factors that may affect performance metrics. Using a 0.5 abnormality threshold in analyzing 5740 individuals, the AI-CAD model showed high pseudo-sensitivity at 95.6% (95% CI, 95.1-96.1) but low pseudo-specificity at 28.1% (26.9-29.2) and positive predictive value (PPV) at 18.4% (16.4-20.4). The area under the ROC curve was 0.820, whereas the area under the precision-recall curve was 0.489. Pseudo-sensitivity was higher among males, younger individuals, and newly diagnosed TB. Threshold analysis revealed trade-offs: increasing the threshold score to 0.68 saved more mWRD tests (42%) but led to an increase in missed cases (10%). Threshold adjustments affected PPV, tests saved, and case detection differently across settings. Scaling up AI-CAD use in TB screening could benefit TB elimination efforts, but threshold scores need to be calibrated to resource availability, prevalence, and program goals. ROC and PRC plots, which specify PPV, could serve as valuable metrics for capturing the best estimate of model performance and cost-benefit ratios within the context-specific implementation of resource-limited settings.
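The reported metrics follow directly from a 2x2 confusion table at a given threshold; the counts below are hypothetical, chosen only to roughly reproduce the reported magnitudes:

```python
def screening_metrics(tp, fp, fn, tn):
    """Pseudo-sensitivity, pseudo-specificity, and PPV from a confusion
    table in which CAD-negative individuals were only partially verified
    (hence the 'pseudo-' prefix: fn and tn rest on incomplete testing)."""
    sens = tp / (tp + fn)   # pseudo-sensitivity
    spec = tn / (tn + fp)   # pseudo-specificity
    ppv = tp / (tp + fp)    # positive predictive value
    return sens, spec, ppv

# Hypothetical counts consistent with a high-sensitivity, low-PPV screen
sens, spec, ppv = screening_metrics(tp=430, fp=1910, fn=20, tn=740)
```

Raising the abnormality threshold moves individuals from the positive to the negative columns, which is exactly the tests-saved versus missed-cases trade-off the study quantifies.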

Deep learning ensemble for abdominal aortic calcification scoring from lumbar spine X-ray and DXA images.

Voss A, Suoranta S, Nissinen T, Hurskainen O, Masarwah A, Sund R, Tohka J, Väänänen SP

PubMed · Aug 22 2025
Abdominal aortic calcification (AAC) is an independent predictor of cardiovascular diseases (CVDs). AAC is typically detected as an incidental finding in spine scans. Early detection of AAC through opportunistic screening using any available imaging modality could help identify individuals at higher risk of developing clinical CVDs. However, AAC is not routinely assessed in clinics, and manual scoring from projection images is time-consuming and prone to inter-rater variability. Automated AAC scoring methods exist, but earlier methods have not accounted for the inherent variability in AAC scoring and were developed for a single imaging modality at a time. We propose an automated method for quantifying AAC from lumbar spine X-ray and dual-energy X-ray absorptiometry (DXA) images using an ensemble of convolutional neural network models that predicts a distribution of probable AAC scores. We treat the AAC score as a normally distributed random variable to account for the variability of manual scoring. The mean and variance of the assumed normal AAC distributions are estimated from manual annotations, and the models in the ensemble are trained by simulating AAC scores from these distributions. Our proposed ensemble approach successfully extracted AAC scores from both X-ray and DXA images, with predicted score distributions demonstrating strong agreement with manual annotations, as evidenced by concordance correlation coefficients of 0.930 for X-ray and 0.912 for DXA. The prediction error between the average estimates of our approach and the average manual annotations was lower than previously reported errors, highlighting the benefit of incorporating uncertainty in AAC scoring.
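The training trick of treating each manual AAC score as a normally distributed random variable can be sketched as follows; the per-image means and spreads are invented for illustration, and the 0-24 range assumes an AAC-24-style scale:

```python
import numpy as np

def simulate_targets(means, stds, rng):
    """Draw one training target per image from N(mean, std^2), so each
    model in the ensemble sees a different plausible AAC score."""
    return rng.normal(means, stds)

rng = np.random.default_rng(0)
# Hypothetical per-image annotation means and spreads (0-24 scale assumed)
means = np.array([2.0, 11.5, 0.5, 18.0])
stds = np.array([0.8, 2.0, 0.4, 1.5])
# Clip to the valid score range after sampling
targets = np.clip(simulate_targets(means, stds, rng), 0, 24)
```

Repeating the draw once per ensemble member yields models whose spread of predictions approximates the annotation uncertainty for each image.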

Vision-Guided Surgical Navigation Using Computer Vision for Dynamic Intraoperative Imaging Updates.

Ruthberg J, Gunderson N, Chen P, Harris G, Case H, Bly R, Seibel EJ, Abuzeid WM

PubMed · Aug 22 2025
Residual disease after endoscopic sinus surgery (ESS) contributes to poor outcomes and revision surgery. Image-guided surgery systems cannot dynamically reflect intraoperative changes. We propose a sensorless, video-based method for intraoperative CT updating using neural radiance fields (NeRF), a deep learning algorithm used to create 3D surgical field reconstructions. Bilateral ESS was performed on three 3D-printed models (n = 6 sides). Postoperative endoscopic videos were processed through a custom NeRF pipeline to generate 3D reconstructions, which were co-registered to preoperative CT scans. Digitally updated CT models were created through algorithmic subtraction of resected regions, then volumetrically segmented and compared to ground-truth postoperative CT. Accuracy was assessed using Hausdorff distance (surface alignment), Dice similarity coefficient (DSC) (volumetric overlap), and Bland-Altman analysis (BAA) (statistical agreement). Comparison of the updated CT and the ground-truth postoperative CT indicated an average Hausdorff distance of 0.27 ± 0.076 mm and a 95th percentile Hausdorff distance of 0.82 ± 0.165 mm, indicating sub-millimeter surface alignment. The DSC was 0.93 ± 0.012, with values >0.9 indicating excellent spatial overlap. BAA indicated modest underestimation of volume on the updated CT versus ground-truth CT, with a mean difference in volumes of 0.40 cm<sup>3</sup> and 95% limits of agreement of 0.04-0.76 cm<sup>3</sup>, indicating that all samples fell within acceptable bounds of variability. Computer vision can enable dynamic intraoperative imaging by generating highly accurate CT updates from monocular endoscopic video without external tracking. By directly visualizing resection progress, this software-driven tool has the potential to enhance surgical completeness in ESS for next-generation navigation platforms.
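The two agreement metrics used here are straightforward to compute; a minimal sketch on toy masks and point sets (not the study's reconstructions):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two surface point sets (N, d):
    the largest nearest-neighbor distance in either direction."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy masks: a 6x6 square vs. the same square missing one row
a = np.zeros((10, 10), bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), bool); b[3:8, 2:8] = True
print(round(dice(a, b), 3))  # → 0.909
```

The 95th percentile variant reported in the paper replaces the outer `max` with the 95th percentile of the nearest-neighbor distances, which damps the influence of isolated outlier points.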

When Age Is More Than a Number: Acceleration of Brain Aging in Neurodegenerative Diseases.

Doering E, Hoenig MC, Cole JH, Drzezga A

PubMed · Aug 21 2025
Aging of the brain is characterized by deleterious processes at various levels including cellular/molecular and structural/functional changes. Many of these processes can be assessed in vivo by means of modern neuroimaging procedures, allowing the quantification of brain age in different modalities. Brain age can be measured by suitable machine learning strategies. The deviation (in both directions) between a person's measured brain age and chronologic age is referred to as the brain age gap (BAG). Although brain age, as defined by these methods, generally is related to the chronologic age of a person, this relationship is not always parallel and can also vary significantly between individuals. Importantly, whereas neurodegenerative disorders are not equivalent to accelerated brain aging, they may induce brain changes that resemble those of older adults, which can be captured by brain age models. Inversely, healthy brain aging may involve a resistance or delay of the onset of neurodegenerative pathologies in the brain. This continuing education article elaborates how the BAG can be computed and explores how BAGs, derived from diverse neuroimaging modalities, offer unique insights into the phenotypes of age-related neurodegenerative diseases. Structural BAGs from T1-weighted MRI have shown promise as phenotypic biomarkers for monitoring neurodegenerative disease progression especially in Alzheimer disease. Additionally, metabolic and molecular BAGs from molecular imaging, functional BAGs from functional MRI, and microstructural BAGs from diffusion MRI, although researched considerably less, each may provide distinct perspectives on particular brain aging processes and their deviations from healthy aging. We suggest that BAG estimation, when based on the appropriate modality, could potentially be useful for disease monitoring and offer interesting insights concerning the impact of therapeutic interventions.
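The BAG computation, including the commonly applied correction for the age-level bias of brain-age models (regressing the gap on chronologic age), can be sketched as follows; the ages below are toy values, not from any cohort:

```python
import numpy as np

def brain_age_gap(predicted, chronological, correct_bias=True):
    """BAG = predicted brain age minus chronologic age. Optionally remove
    the well-known regression-to-the-mean bias by regressing BAG on age
    and keeping only the residual."""
    gap = predicted - chronological
    if correct_bias:
        slope, intercept = np.polyfit(chronological, gap, 1)
        gap = gap - (slope * chronological + intercept)
    return gap

age = np.array([55.0, 60.0, 70.0, 80.0])
pred = np.array([57.0, 59.0, 75.0, 83.0])   # hypothetical model outputs
gap = brain_age_gap(pred, age)
```

After correction, a positive residual gap marks a brain that looks older than expected for that person's age, which is the quantity related to neurodegenerative phenotypes in the article.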

Multimodal Integration in Health Care: Development With Applications in Disease Management.

Hao Y, Cheng C, Li J, Li H, Di X, Zeng X, Jin S, Han X, Liu C, Wang Q, Luo B, Zeng X, Li K

PubMed · Aug 21 2025
Multimodal data integration has emerged as a transformative approach in the health care sector, systematically combining complementary biological and clinical data sources such as genomics, medical imaging, electronic health records, and wearable device outputs. This approach provides a multidimensional perspective of patient health that enhances the diagnosis, treatment, and management of various medical conditions. This viewpoint presents an overview of the current state of multimodal integration in health care, spanning clinical applications, current challenges, and future directions. We focus primarily on its applications across different disease domains, particularly oncology and ophthalmology; other diseases are discussed more briefly owing to the limited available literature. In oncology, the integration of multimodal data enables more precise tumor characterization and personalized treatment plans. Multimodal fusion demonstrates accurate prediction of anti-human epidermal growth factor receptor 2 therapy response (area under the curve = 0.91). In ophthalmology, multimodal integration through the combination of genetic and imaging data facilitates the early diagnosis of retinal diseases. However, substantial challenges remain regarding data standardization, model deployment, and model interpretability. We also highlight the future directions of multimodal integration, including its expanded disease applications, such as neurological and otolaryngological diseases, and the trend toward large-scale multimodal models, which enhance accuracy. Overall, the innovative potential of multimodal integration is expected to further revolutionize the health care industry, providing more comprehensive and personalized solutions for disease management.

Ascending Aortic Dimensions and Body Size: Allometric Scaling, Normative Values, and Prognostic Performance.

Tavolinejad H, Beeche C, Dib MJ, Pourmussa B, Damrauer SM, DePaolo J, Azzo JD, Salman O, Duda J, Gee J, Kun S, Witschey WR, Chirinos JA

PubMed · Aug 21 2025
Ascending aortic (AscAo) dimensions partially depend on body size. Ratiometric (linear) indexing of AscAo dimensions to height and body surface area (BSA) is currently recommended, but it is unclear whether these allometric relationships are indeed linear. This study aimed to evaluate allometric relations, normative values, and the prognostic performance of AscAo dimension indices. We studied UK Biobank (UKB) (n = 49,271) and Penn Medicine BioBank (PMBB) (n = 8,426) participants. A convolutional neural network was used to segment the thoracic aorta from available magnetic resonance and computed tomography thoracic images. Normal allometric exponents of AscAo dimensions were derived from log-log models among healthy reference subgroups. Prognostic associations of AscAo dimensions were assessed with the use of Cox models. Among reference subgroups of both UKB (n = 11,310; age 52 ± 8 years; 37% male) and PMBB (n = 799; age 50 ± 16 years; 41% male), diameter/height, diameter/BSA, and area/BSA exhibited highly nonlinear relationships. In contrast, the allometric exponent of the area/height index was close to unity (UKB: 1.04; PMBB: 1.13). Accordingly, the linear area/height index did not exhibit residual associations with height (UKB: R<sup>2</sup> = 0.04 [P = 0.411]; PMBB: R<sup>2</sup> = 0.08 [P = 0.759]). Across quintiles of height and BSA, area/height was the only ratiometric index that consistently classified aortic dilation, whereas all other indices systematically underestimated or overestimated AscAo dilation at the extremes of body size. Area/height was robustly associated with thoracic aorta events in the UKB (HR: 3.73; P < 0.001) and the PMBB (HR: 1.83; P < 0.001). Among AscAo indices, area/height was allometrically correct, did not exhibit residual associations with body size, and was consistently associated with adverse events.
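The allometric exponent is the slope of a log-log fit of the aortic dimension against body size; an exponent near 1 means a simple linear ratio (dimension/size) is adequate. A sketch on synthetic data with a built-in exponent close to unity (all values invented):

```python
import numpy as np

def allometric_exponent(size, dimension):
    """Fit dimension = a * size^b on log-log axes and return the exponent b.
    b near 1 justifies linear ratiometric indexing (dimension / size)."""
    b, _ = np.polyfit(np.log(size), np.log(dimension), 1)
    return b

rng = np.random.default_rng(1)
height = rng.uniform(1.5, 2.0, 500)                       # meters, synthetic
# Synthetic AscAo area with true exponent 1.05 and multiplicative noise
area = 3.0 * height ** 1.05 * np.exp(rng.normal(0, 0.05, 500))
print(round(allometric_exponent(height, area), 2))
```

For diameter/height or area/BSA, the same fit would return exponents far from 1, which is why those linear ratios retain residual associations with body size.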
