Zeosk, M., Kun, E., Reddy, S., Pandey, D., Xu, L., Wang, J. Y., Li, C., Gray, R. S., Wise, C. A., Otomo, N., Narasimhan, V. M.

medRxiv preprint · Sep 5, 2025
Scoliosis is the most common developmental spinal deformity, but its genetic underpinnings remain only partially understood. To enhance the identification of scoliosis-related loci, we utilized whole-body dual-energy X-ray absorptiometry (DXA) scans from 57,887 individuals in the UK Biobank (UKB) and quantified spine curvature by applying deep learning models to segment and then landmark vertebrae, measuring the cumulative horizontal displacement of the spine from a central axis. In a subset of 120 individuals, our automated image-derived curvature measurements showed a correlation of 0.92 with clinical Cobb angle assessments, supporting their validity as a proxy for scoliosis severity. To connect spinal curvature with its genetic basis, we conducted a genome-wide association study (GWAS). Our quantitative imaging phenotype allowed us to identify two novel loci associated with scoliosis in a European population not seen in previous GWAS. These loci lie in the gene SEM1/SHFM1 and in a lncRNA on chromosome 3 that is downstream of EDEM1 and upstream of GRM7. Genetic correlation analysis revealed significant overlap between our image-based GWAS and ICD-10-based GWAS in both the UKB and the Biobank of Japan. We also showed that our quantitative GWAS had more statistical power to identify new loci than a case-control dataset with an order of magnitude larger sample size. Increased spine curvature was also associated with increased leg length discrepancy, reduced muscle strength, decreased bone density, and increased incidence of knee, but not hip, osteoarthritis. Our results illustrate the potential of using quantitative imaging phenotypes to uncover genetic associations that are challenging to capture with medical records alone and to identify new loci for functional follow-up.
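The curvature phenotype described in this abstract can be illustrated with a minimal sketch; the centroid input format and function name below are assumptions for illustration, not the authors' pipeline:

```python
import numpy as np

def cumulative_horizontal_displacement(centroids):
    """Sum of absolute horizontal offsets of vertebral centroids
    from a central vertical axis (here, the mean x position).
    `centroids` is an (N, 2) array of (x, y) landmarks, one row
    per vertebra -- a hypothetical input format."""
    pts = np.asarray(centroids, dtype=float)
    axis_x = pts[:, 0].mean()
    return float(np.abs(pts[:, 0] - axis_x).sum())

# A perfectly straight spine scores zero.
straight = [(5.0, float(y)) for y in range(10)]
print(cumulative_horizontal_displacement(straight))  # 0.0
```

A laterally deviated spine accumulates a positive score, giving a continuous severity measure rather than a case/control label.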

Wu J, Xu W, Li L, Xie W, Tang B

PubMed · Sep 4, 2025
Background: Oncologic emergencies in critically ill cancer patients frequently require rapid, real-time assessment of tumor responses to therapeutic interventions. However, conventional imaging modalities such as computed tomography and magnetic resonance imaging are often impractical in intensive care units (ICUs) due to logistical constraints and patient instability. Super-resolution ultrasound (SR-US) imaging has emerged as a promising noninvasive alternative, facilitating bedside evaluation of tumor microvascular dynamics with exceptional spatial resolution. This study assessed the clinical utility of real-time SR-US imaging in monitoring tumor perfusion changes during emergency management in oncologic ICU settings. Methods: In this prospective observational study, critically ill patients with oncologic emergencies underwent bedside SR-US imaging before and after the initiation of emergency therapy (e.g., corticosteroids, decompression, or chemotherapy). SR-US was employed to quantify microvascular parameters, including perfusion density and flow heterogeneity. Data processing incorporated artificial intelligence for real-time vessel segmentation and quantitative analysis. Results: SR-US imaging detected perfusion changes within hours of therapy initiation. A significant correlation was observed between reduced tumor perfusion and clinical improvement, including symptom relief and shorter ICU stay. The technology enables visualization of microvessels as small as 30 µm, surpassing conventional ultrasound limits. No adverse events were reported with the use of contrast microbubbles. In addition, SR-US imaging reduces the need for transport to the radiology department, thereby optimizing ICU workflow. Conclusions: Real-time SR-US imaging offers a novel, bedside-compatible method for evaluating tumor vascular response during the acute phase of oncologic emergencies. Its integration into ICU care pathways could enhance timely decision-making, reduce reliance on static imaging, and support personalized cancer management. Further multicenter validation is required.
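Perfusion density, one of the microvascular parameters quantified in this abstract, admits a simple reading as the vessel-occupied fraction of the imaged region; the definition and function name below are assumptions, not the study's implementation:

```python
import numpy as np

def perfusion_density(vessel_mask):
    """Fraction of the imaged region occupied by detected
    microvessels, computed from an AI-produced binary vessel
    segmentation mask -- one simple, assumed definition of
    the 'perfusion density' parameter."""
    m = np.asarray(vessel_mask, dtype=bool)
    return float(m.sum() / m.size)

# One vessel pixel out of four -> density 0.25.
print(perfusion_density([[1, 0], [0, 0]]))  # 0.25
```

Tracking this fraction before and after therapy initiation would yield the kind of longitudinal perfusion change the study reports.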

Li X, Hu Z, Wang C, Cao S, Zhang C

PubMed · Sep 4, 2025
Technological innovations in robot-assisted ultrasound (RAUS) have remarkably advanced precision and intelligent medical imaging diagnosis. This study uses bibliometric methods to systematically analyze the technological evolution and research frontiers of the RAUS field, providing insights for future research. English-language research papers and reviews related to RAUS published between 2000 and 2024 were retrieved from the Web of Science Core Collection database. Using analytical tools such as R (with the Bibliometrix package), VOSviewer, and CiteSpace, the study conducted a bibliometric analysis from multiple angles, including literature distribution, collaboration networks, and knowledge clustering, and the visualizations comprehensively revealed the hot topics and emerging research frontiers within the RAUS field. The results reveal an exponential growth trend in RAUS research, with China leading in publication output (28.51% of total publications), while the USA leads in citation impact and international collaboration networks. Institutions such as Johns Hopkins University and the Chinese Academy of Sciences emerge as highly productive core contributors. The field has formed a multidimensional interdisciplinary landscape encompassing mathematical sciences, engineering technology, and medical health, with a focus on the integration of artificial intelligence (AI) and its clinical translation. From 2000 to 2014, the development of "mobile robots" laid the cornerstone for further advances. From 2015 to 2018, research focused on "surgery" and "tumors" for medical applications. From 2019 to 2024, the core focus was on "medical robots and systems," "artificial intelligence," and "robotic ultrasound," highlighting the field's transformation into an AI-driven model. This study systematically reviewed the development of RAUS through bibliometric methods, enriching academic understanding of the field and providing guidance for future technological iterations, clinical translation, and global cooperation toward precision medicine and balanced medical resources.

Yu X, Wu Q, Qin W, Zhong T, Su M, Ma J, Zhang Y, Ji X, Wang W, Quan G, Du Y, Chen Y, Lai X

PubMed · Sep 4, 2025
Photon-counting computed tomography (PCCT) based on photon-counting detectors (PCDs) represents a cutting-edge CT technology, offering higher spatial resolution, reduced radiation dose, and advanced material decomposition capabilities. Accurately modeling complex, nonlinear PCDs under limited calibration data remains one of the challenges hindering the widespread accessibility of PCCT. This paper introduces a physics-ASIC architecture-driven deep learning detector model for PCDs. The model adeptly captures the comprehensive response of the PCD, encompassing both sensor and ASIC responses. We present experimental results demonstrating the model's accuracy and robustness with limited calibration data. Key advancements include reduced calibration errors, reasonable physics-ASIC parameter estimation, and high-quality, high-accuracy material decomposition images.

Chan TJ, Nair-Kanneganti A, Anthony B, Pouch A

PubMed · Sep 4, 2025
Diagnostic ultrasound has long filled a crucial niche in medical imaging thanks to its portability, affordability, and favorable safety profile. Now, multi-view hardware and deep-learning-based image reconstruction algorithms promise to extend this niche to increasingly sophisticated applications, such as volume rendering and long-term organ monitoring. However, progress on these fronts is impeded by the complexities of ultrasound electronics and by the scarcity of high-fidelity radiofrequency data. Evidently, there is a critical need for tools that enable rapid ultrasound prototyping and generation of synthetic data. We meet this need with MUSiK, the first open-source ultrasound simulation library expressly designed for multi-view acoustic simulations of realistic anatomy. This library covers the full gamut of image acquisition: building anatomical digital phantoms, defining and positioning diverse transducer types, running simulations, and reconstructing images. In this paper, we demonstrate several use cases for MUSiK. We simulate in vitro multi-view experiments and compare the resolution and contrast of the resulting images. We then perform multiple conventional and experimental in vivo imaging tasks, such as 2D scans of the kidney, 2D and 3D echocardiography, 2.5D tomography of large regions, and 3D tomography for lesion detection in soft tissue. Finally, we introduce MUSiK's Bayesian reconstruction framework for multi-view ultrasound and validate an original SNR-enhancing reconstruction algorithm. We anticipate that these unique features will seed new hypotheses and accelerate the overall pace of ultrasound technological development. The MUSiK library is publicly available at github.com/norway99/MUSiK.

Liu Y, Wang E, Gong M, Tao B, Wu Y, Qi X, Chen X

PubMed · Sep 4, 2025
Accurate preoperative planning for dental implants, especially in edentulous or partially edentulous patients, relies on precise localization of radiographic templates that guide implant positioning. By having the patient wear a patient-specific radiographic template, clinicians can better assess anatomical constraints and plan optimal implant paths. However, due to the low radiopacity of such templates, their spatial position is difficult to determine directly from cone-beam computed tomography (CBCT) scans. To overcome this limitation, high-resolution optical scans of the templates are acquired, providing detailed geometric information for accurate spatial registration. This paper proposes a geometry-driven cross-modal registration framework that aligns the optical scan model of the radiographic template with patient CBCT data, enhancing registration accuracy through geometric feature extraction such as curvature and occlusal contours. A hybrid deep learning workflow further improves robustness, achieving a root mean square error (RMSE) of 1.68 mm and a mean absolute error (MAE) of 1.25 mm. The system also incorporates augmented reality (AR) for real-time surgical navigation. Clinical and phantom experiments validate its effectiveness in supporting precise implant path planning and execution. The proposed system enhances the efficiency and safety of dental implant surgery by integrating geometric feature extraction, deep learning-based registration, and AR-assisted navigation.
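The RMSE and MAE figures quoted in this abstract are standard point-wise registration errors; a minimal sketch over corresponding 3-D points (the paired-point input format is an assumption):

```python
import numpy as np

def rmse_mae(pred, target):
    """Root mean square error and mean absolute error of the
    per-point Euclidean distances between corresponding 3-D
    points -- a standard registration-error definition."""
    d = np.linalg.norm(np.asarray(pred, float) - np.asarray(target, float), axis=1)
    return float(np.sqrt((d ** 2).mean())), float(d.mean())

pred = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
target = [[0.0, 0.0, 1.0], [1.0, 0.0, 2.0]]
rmse, mae = rmse_mae(pred, target)  # per-point distances are 1.0 and 2.0
```

Because RMSE squares the distances before averaging, it penalizes large outlier misalignments more heavily than MAE, which is why papers often report both.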

Hecht EM, Hu HH, Serai SD, Wu HH, Brunsing RL, Guimaraes AR, Kurugol S, Ringe KI, Syed AB

PubMed · Sep 4, 2025
In March 2025, 145 attendees convened at the Hub for Clinical Collaboration of the Children's Hospital of Philadelphia for the inaugural International Society for Magnetic Resonance in Medicine (ISMRM) Body MRI Study Group workshop entitled "Body MRI: Unsolved Problems and Unmet Needs." Approximately 24% of the attendees were MDs or MD/PhDs, 45% were PhDs, and 30% were early-career trainees and postdoctoral associates. Among the invited speakers and moderators, 28% were from outside the United States, with a 40:60 female-to-male ratio. The 2.5-day program brought together a multidisciplinary group of scientists, radiologists, technologists, and trainees. Session topics included quantitative imaging biomarkers, low- and high-field strengths, artifact and motion correction, rapid imaging and focused protocols, and artificial intelligence. Another key session focused on the importance of team science and allowed speakers from academia and industry to share their personal experiences and offer advice on how to successfully translate new MRI technology into clinical practice. This article summarizes key points from the event and perceived unmet clinical needs within the field of body MRI.

Gulamali F, Sawant AS, Liharska L, Horowitz C, Chan L, Hofer I, Singh K, Richardson L, Mensah E, Charney A, Reich D, Hu J, Nadkarni G

PubMed · Sep 4, 2025
The growing adoption of diagnostic and prognostic algorithms in health care has led to concerns about the perpetuation of algorithmic bias against disadvantaged groups. Deep learning methods to detect and mitigate bias have revolved around modifying models, optimization strategies, and threshold calibration, with varying levels of success and tradeoffs. However, there have been limited substantive efforts to address bias at the level of the data used to generate algorithms in health care datasets. The aim of this study is to create a simple metric (AEquity) that uses a learning curve approximation to detect and mitigate bias via guided dataset collection or relabeling. We demonstrate this metric in two well-known examples, chest X-rays and health care cost utilization, and detect novel biases in the National Health and Nutrition Examination Survey. Using AEquity to guide data-centric collection for each diagnostic finding in the chest radiograph dataset decreased bias by between 29% and 96.5%, as measured by differences in area under the curve. Next, we examined (1) whether AEquity worked on intersectional populations and (2) whether AEquity is invariant to the choice of fairness metric, not just area under the curve, by measuring its effect on bias in false negative rate, precision, and false discovery rate for Black patients on Medicaid. At this intersection of race and socioeconomic status, AEquity-based interventions reduced bias across a number of fairness metrics, including overall false negative rate by 33.3% (absolute bias reduction 1.88×10⁻¹, 95% CI 1.4×10⁻¹ to 2.5×10⁻¹; relative reduction 33.3%, 95% CI 26.6%-40%), precision bias by 7.50×10⁻² (95% CI 7.48×10⁻² to 7.51×10⁻²; relative reduction 94.6%, 95% CI 94.5%-94.7%), and false discovery rate bias by 94.5% (absolute reduction 3.50×10⁻², 95% CI 3.49×10⁻² to 3.50×10⁻²). Similarly, AEquity-guided data collection demonstrated bias reduction of up to 80% on mortality prediction with the National Health and Nutrition Examination Survey (absolute bias reduction 0.08, 95% CI 0.07-0.09). We then benchmarked AEquity against state-of-the-art data-guided debiasing measures, balanced empirical risk minimization and calibration, and showed that AEquity-guided data collection outperforms both. Moreover, AEquity works with fully connected networks; convolutional neural networks such as ResNet-50; transformer architectures such as ViT-B-16, a vision transformer with 86 million parameters; and nonparametric methods such as the Light Gradient-Boosting Machine. In short, AEquity is a robust tool, applied across datasets, algorithms, and intersectional analyses and evaluated against a range of traditional fairness metrics.
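The subgroup false-negative-rate gap, one of the fairness metrics reported in this abstract, can be computed directly from predictions; a minimal sketch with hypothetical function names, not the AEquity implementation:

```python
def false_negative_rate(y_true, y_pred):
    """FN / (FN + TP): fraction of true positives that were missed."""
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    return fn / (fn + tp)

def fnr_gap(y_true, y_pred, group):
    """Absolute FNR difference between two subgroups (group
    labels 0/1) -- a simple subgroup-bias measure of the kind
    the study reports reductions in."""
    def subgroup_fnr(g):
        yt = [t for t, s in zip(y_true, group) if s == g]
        yp = [p for p, s in zip(y_pred, group) if s == g]
        return false_negative_rate(yt, yp)
    return abs(subgroup_fnr(0) - subgroup_fnr(1))

# Group 0 misses one of its two positives (FNR 0.5); group 1 misses none.
print(fnr_gap([1, 1, 1, 1], [1, 0, 1, 1], [0, 0, 1, 1]))  # 0.5
```

An "absolute bias reduction" is then simply the drop in this gap after a data-centric intervention.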

Hu T, Cai Y, Zhou T, Zhang Y, Huang K, Huang X, Qian S, Wang Q, Luo D

PubMed · Sep 4, 2025
Predictive models of cervical lymph node metastasis and metastatic volume in papillary thyroid carcinoma (PTC) were constructed based on machine learning algorithms and preoperative ultrasound characteristics. A retrospective analysis was conducted on 573 PTC patients who underwent surgery at our institution from 2017 to 2022. Patient demographic and clinical characteristics were systematically collected. Feature selection was performed using univariate analysis and logistic regression (LR) analysis, with statistically significant variables identified at a threshold of p < 0.05. Predictive models were constructed using advanced machine learning algorithms: K-Nearest Neighbors (KNN), gradient boosting (XGBoost), and Support Vector Machine (SVM). Model performance was rigorously assessed on validation cohort data using the area under the Receiver Operating Characteristic (ROC) curve, sensitivity, specificity, and accuracy. Of the 573 patients, 320 had lymph node metastasis, of whom 127 had small-volume and 193 had medium-volume metastasis. In the model predicting neck lymph node metastasis, the gradient boosting method exhibited the best performance, with an area under the ROC curve of 0.784, sensitivity of 76.2%, specificity of 70.6%, and accuracy of 73.8%. In the model predicting metastatic volume in neck lymph nodes, the gradient boosting method also demonstrated the best performance, with an area under the ROC curve of 0.779, sensitivity of 71.7%, specificity of 75.9%, and accuracy of 74.4%. Machine learning-based predictive models integrating preoperative ultrasound features demonstrate robust performance in stratifying neck lymph node metastasis risk for PTC patients. These models optimize surgical planning by guiding the extent of lymph node dissection and individualizing treatment strategies, potentially reducing unnecessary extensive surgeries. The integration of advanced computational techniques with clinical imaging provides a data-driven paradigm for preoperative risk assessment in thyroid oncology.
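The sensitivity, specificity, and accuracy figures reported in this abstract follow directly from confusion-matrix counts; a minimal sketch (not the study's code), which would be applied to any model's binary predictions on the validation cohort:

```python
def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary
    predictions, via the four confusion-matrix counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy

# Toy cohort: 3 metastatic (label 1) and 2 non-metastatic (label 0) patients.
sens, spec, acc = diagnostic_metrics([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
```

The ROC AUC, by contrast, is computed from the model's continuous risk scores across all thresholds, which is why it is reported alongside these threshold-dependent metrics.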

Alvaro Aranibar Roque, Helga Sebastian

arXiv preprint · Sep 4, 2025
Pneumothorax, the abnormal accumulation of air in the pleural space, can be life-threatening if undetected. Chest X-rays are the first-line diagnostic tool, but small pneumothoraces can be subtle. We propose an automated deep-learning pipeline using a U-Net with an EfficientNet-B4 encoder to segment pneumothorax regions. Trained on the SIIM-ACR dataset with data augmentation and a combined binary cross-entropy plus Dice loss, the model achieved an IoU of 0.7008 and a Dice score of 0.8241 on the independent PTX-498 dataset. These results demonstrate that the model can accurately localize pneumothoraces and support radiologists.
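The IoU and Dice score reported in this abstract are the standard overlap metrics for binary segmentation masks; a minimal sketch, assuming NumPy-style mask arrays:

```python
import numpy as np

def dice_iou(pred_mask, target_mask):
    """Dice coefficient and intersection-over-union between a
    predicted and a ground-truth binary segmentation mask."""
    p = np.asarray(pred_mask, dtype=bool)
    t = np.asarray(target_mask, dtype=bool)
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    dice = 2.0 * inter / (p.sum() + t.sum())
    return float(dice), float(inter / union)

# Prediction covers the one true pixel plus one extra pixel.
dice, iou = dice_iou([[1, 1], [0, 0]], [[1, 0], [0, 0]])
```

Dice weights the intersection twice, so it is always at least as large as IoU for the same masks, consistent with the 0.8241 versus 0.7008 figures above.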