MedImg: An Integrated Database for Public Medical Image.

Zhong B, Fan R, Ma Y, Ji X, Cui Q, Cui C

PubMed · paper · Aug 20, 2025
The advancements in deep learning algorithms for medical image analysis have garnered significant attention in recent years. While several studies show promising results, with models achieving or even surpassing human performance, translating these advancements into clinical practice is still accompanied by various challenges. A primary obstacle is the limited availability of large-scale, well-characterized datasets for validating the generalization of these approaches. To address this challenge, we curated a diverse collection of medical image datasets from multiple public sources, containing 105 datasets and a total of 1,995,671 images. These images span 14 modalities, including X-ray, computed tomography, magnetic resonance imaging, optical coherence tomography, ultrasound, and endoscopy, and originate from 13 organs, such as the lung, brain, eye, and heart. We then constructed an online database, MedImg, which incorporates and systematically organizes these medical images to facilitate data accessibility. MedImg serves as an intuitive, open-access platform for research in deep learning-based medical image analysis, accessible at https://www.cuilab.cn/medimg/.
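As a rough illustration of how such a collection can be indexed, here is a minimal pandas sketch that tallies image counts by modality and organ, as MedImg does at the collection level (14 modalities, 13 organs, roughly 2 million images). The tabular schema and dataset names below are invented for illustration and are not MedImg's actual layout.

```python
# Minimal sketch: indexing a local mirror of a MedImg-style collection
# by modality and organ. The columns ("dataset", "modality", "organ",
# "n_images") are an assumption for illustration, not MedImg's schema.
import pandas as pd

catalog = pd.DataFrame(
    [
        {"dataset": "chest-xray-a", "modality": "X-ray", "organ": "lung", "n_images": 112_000},
        {"dataset": "brain-mri-b", "modality": "MRI", "organ": "brain", "n_images": 7_023},
        {"dataset": "retina-oct-c", "modality": "OCT", "organ": "eye", "n_images": 84_495},
    ]
)

# Summarize image counts per modality and per organ, mirroring the
# collection-level statistics reported in the abstract.
by_modality = catalog.groupby("modality")["n_images"].sum().sort_values(ascending=False)
by_organ = catalog.groupby("organ")["n_images"].sum().sort_values(ascending=False)
print(by_modality, by_organ, sep="\n")
```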

[Digital and intelligent medicine empowering precision abdominal surgery: today and the future].

Dong Q, Wang JM, Xiu WL

PubMed · paper · Aug 20, 2025
The complex anatomical structure of abdominal organs demands high precision in surgical procedures, which also increases postoperative complication risks. Advancements in digital medicine have created new opportunities for precision surgery. This article summarizes the current applications of digital intelligence in precision abdominal surgery. The processing and real-time monitoring technologies of medical imaging provide powerful tools for accurate diagnosis and treatment. Meanwhile, big data analysis and precise classification capabilities of artificial intelligence further enhance diagnostic efficiency and safety. Additionally, the paper analyzes the advantages and limitations of digital intelligence in empowering precision abdominal surgery, while exploring future development directions.

TCFNet: Bidirectional face-bone transformation via a Transformer-based coarse-to-fine point movement network

Runshi Zhang, Bimeng Jie, Yang He, Junchen Wang

arXiv · preprint · Aug 20, 2025
Computer-aided surgical simulation is a critical component of orthognathic surgical planning, where accurately simulating face-bone shape transformations is essential. Traditional biomechanical simulation methods are limited by long computation times, labor-intensive data processing, and low accuracy. Recently, deep learning-based simulation methods have been proposed that view this problem as a point-to-point transformation between skeletal and facial point clouds. However, these approaches cannot process large-scale point sets, have limited receptive fields that lead to noisy points, and rely on complex registration-based preprocessing and postprocessing. These shortcomings limit the performance and widespread applicability of such methods. We therefore propose a Transformer-based coarse-to-fine point movement network (TCFNet) that learns unique, complicated correspondences at the patch and point levels for dense face-bone point cloud transformations. This end-to-end framework adopts a Transformer-based network in the first stage and a local information aggregation network (LIA-Net) in the second, the two reinforcing each other to generate precise point movement paths. LIA-Net compensates for the neighborhood precision loss of the Transformer-based network by modeling local geometric structures (edges, orientations, and relative position features), and the global features from the first stage guide the local displacement through a gated recurrent unit. Inspired by deformable medical image registration, we also propose an auxiliary loss that incorporates expert knowledge for reconstructing critical organs. Compared with existing state-of-the-art (SOTA) methods on the gathered datasets, TCFNet achieves outstanding evaluation metrics and visualization results. The code is available at https://github.com/Runshi-Zhang/TCFNet.
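The gating step described above, global features guiding per-point local displacement through a gated recurrent unit, can be sketched as follows. This is a reader's minimal PyTorch illustration of the idea with assumed dimensions and module names; it is not the authors' TCFNet code (see their repository for that).

```python
# Minimal PyTorch sketch of the gating idea: a GRU cell fuses a global
# (Transformer-stage) feature with per-point local features to produce a
# refined displacement. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class GatedDisplacementHead(nn.Module):
    def __init__(self, local_dim: int = 64, global_dim: int = 256):
        super().__init__()
        # The GRU cell treats the local feature as input and the
        # (projected) global feature as the hidden state it updates.
        self.proj_global = nn.Linear(global_dim, local_dim)
        self.gru = nn.GRUCell(input_size=local_dim, hidden_size=local_dim)
        self.to_offset = nn.Linear(local_dim, 3)  # per-point xyz movement

    def forward(self, local_feat: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        # local_feat: (N, local_dim) per-point features from LIA-Net-style
        # neighborhood aggregation; global_feat: (1, global_dim) scene code.
        n = local_feat.shape[0]
        h = self.proj_global(global_feat).expand(n, -1).contiguous()
        h = self.gru(local_feat, h)   # gated fusion of local and global
        return self.to_offset(h)      # (N, 3) refined displacement

pts_feat = torch.randn(1024, 64)
scene = torch.randn(1, 256)
offsets = GatedDisplacementHead()(pts_feat, scene)
print(offsets.shape)  # torch.Size([1024, 3])
```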

AI-assisted 3D versus conventional 2D preoperative planning in total hip arthroplasty for Crowe type II-IV high hip dislocation: a two-year retrospective study.

Lu Z, Yuan C, Xu Q, Feng Y, Xia Q, Wang X, Zhu J, Wu J, Wang T, Chen J, Wang X, Wang Q

PubMed · paper · Aug 20, 2025
With the growing complexity of total hip arthroplasty (THA) for high hip dislocation (HHD), artificial intelligence (AI)-assisted three-dimensional (3D) preoperative planning has emerged as a promising tool to enhance surgical accuracy. This study compared clinical outcomes of AI-assisted 3D versus conventional two-dimensional (2D) X-ray preoperative planning in such cases. A retrospective cohort of 92 patients with Crowe type II-IV HHD who underwent THA between May 2020 and January 2023 was analyzed. Patients received either AI-assisted 3D preoperative planning (n = 49) or 2D X-ray preoperative planning (n = 43). The primary outcome was the accuracy of implant size prediction. Secondary outcomes included operative time, blood loss, leg length discrepancy (LLD), implant positioning, functional scores (Harris Hip Score [HHS], WOMAC, VAS), complications, and implant survival at 24 months. At 24 months, both groups demonstrated significant improvements in functional outcomes. Compared to the 2D X-ray group, the AI-3D group showed higher accuracy in implant size prediction (acetabular cup: 59.18% vs. 30.23%; femoral stem: 65.31% vs. 41.86%; both p < 0.05), a greater proportion of cups placed within the Lewinnek and Callanan safe zones (p < 0.05), shorter operative time, reduced intraoperative blood loss, and more effective correction of LLD (all p < 0.05). No significant differences were observed in HHS, WOMAC, or VAS scores between groups at 24 months (all p > 0.05). Implant survivorship was also comparable (100% vs. 97.7%; p = 0.283), with one revision noted in the 2D X-ray group. AI-assisted 3D preoperative planning improves prosthesis selection accuracy, implant positioning, and perioperative outcomes in Crowe type II-IV HHD THA, although 2-year functional and survival outcomes were comparable to those achieved with 2D X-ray planning. Considering the higher cost, radiation exposure, and workflow complexity, its broader application warrants further investigation, particularly in identifying patients who may benefit most.
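For readers who want to sanity-check the headline comparison, a two-proportion chi-square test on the reported acetabular-cup accuracies reproduces the significance claim. The counts below are back-calculated from the reported percentages and group sizes; this is a reader's sketch, not the authors' analysis code.

```python
# Illustrative re-check of the reported cup-size prediction accuracies
# (59.18% of 49 AI-3D patients ~ 29; 30.23% of 43 2D patients ~ 13)
# with a chi-square test of proportions.
from scipy.stats import chi2_contingency

table = [[29, 49 - 29],   # AI-3D group: accurate vs. not accurate
         [13, 43 - 13]]   # 2D X-ray group: accurate vs. not accurate
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}")  # p < 0.05, consistent with the paper
```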

Machine learning-assisted radiogenomic analysis for miR-15a expression prediction in renal cell carcinoma.

Mytsyk Y, Kowal P, Kobilnyk Y, Lesny M, Skrzypczyk M, Stroj D, Dosenko V, Kucheruk O

PubMed · paper · Aug 20, 2025
Renal cell carcinoma (RCC) is a prevalent malignancy with highly variable outcomes. MicroRNA-15a (miR-15a) has emerged as a promising prognostic biomarker in RCC, linked to angiogenesis, apoptosis, and proliferation. Radiogenomics integrates radiological features with molecular data to non-invasively predict biomarkers, offering valuable insights for precision medicine. This study aimed to develop a machine learning-assisted radiogenomic model to predict miR-15a expression in RCC. A retrospective analysis was conducted on 64 RCC patients who underwent preoperative multiphase contrast-enhanced CT or MRI. Radiological features, including tumor size, necrosis, and nodular enhancement, were evaluated. MiR-15a expression was quantified using real-time qPCR from archived tissue samples. Polynomial regression and Random Forest models were employed for prediction, and hierarchical clustering with K-means analysis was used for phenotypic stratification. Statistical significance was assessed using non-parametric tests and machine learning performance metrics. Tumor size was the strongest radiological predictor of miR-15a expression (adjusted R² = 0.8281, p < 0.001). High miR-15a levels correlated with aggressive features, including necrosis and nodular enhancement (p < 0.05), while lower levels were associated with cystic components and macroscopic fat. The Random Forest regression model explained 65.8% of the variance in miR-15a expression (R² = 0.658). For classification, the Random Forest classifier demonstrated exceptional performance, achieving an AUC of 1.0, a precision of 1.0, a recall of 0.9, and an F1-score of 0.95. Hierarchical clustering effectively segregated tumors into aggressive and indolent phenotypes, consistent with clinical expectations. Radiogenomic analysis using machine learning provides a robust, non-invasive approach to predicting miR-15a expression, enabling enhanced tumor stratification and personalized RCC management. These findings underscore the clinical utility of integrating radiological and molecular data, paving the way for broader adoption of precision medicine in oncology.
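A minimal sketch of the regression workflow the abstract describes, Random Forest prediction of miR-15a expression from radiological features, might look as follows. The data here are synthetic and the feature set only mirrors the features highlighted above.

```python
# Minimal sklearn sketch: Random Forest regression of a miR-15a-like
# target on radiological features. Synthetic data; feature names mirror
# those highlighted in the abstract (tumor size, necrosis, enhancement).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 64  # cohort size in the study
X = np.column_stack([
    rng.normal(5.0, 2.0, n),   # tumor size (cm), assumed distribution
    rng.integers(0, 2, n),     # necrosis present (0/1)
    rng.integers(0, 2, n),     # nodular enhancement (0/1)
])
# Synthetic target driven mostly by tumor size, echoing the reported
# dominance of size as a predictor.
y = 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 1.0, n)

model = RandomForestRegressor(n_estimators=300, random_state=0)
r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
print(f"cross-validated R^2 = {r2:.3f}")
```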

Review of GPU-based Monte Carlo simulation platforms for transmission and emission tomography in medicine.

Chi Y, Schubert KE, Badal A, Roncali E

PubMed · paper · Aug 20, 2025
Monte Carlo (MC) simulation remains the gold standard for modeling complex physical interactions in transmission and emission tomography, with GPU parallel computing offering unmatched computational performance and enabling practical, large-scale MC applications. In recent years, rapid advancements in both GPU technologies and tomography techniques have been observed. Harnessing emerging GPU capabilities to accelerate MC simulation and strengthen its role in supporting the rapid growth of medical tomography has become an important topic. To provide useful insights, we conducted a comprehensive review of state-of-the-art GPU-accelerated MC simulations in tomography, highlighting current achievements and underdeveloped areas.

Approach: We reviewed key technical developments across major tomography modalities, including computed tomography (CT), cone-beam CT (CBCT), positron emission tomography, single-photon emission computed tomography, proton CT, emerging techniques, and hybrid modalities. We examined MC simulation methods and major CPU-based MC platforms that have historically supported medical imaging development, followed by a review of GPU acceleration strategies, hardware evolutions, and leading GPU-based MC simulation packages. Future development directions were also discussed.

Main Results: Significant advancements have been achieved in both tomography and MC simulation technologies over the past half-century. The introduction of GPUs has enabled speedups often exceeding 100-1000 times over CPU implementations, providing essential support to the development of new imaging systems. Emerging GPU features like ray-tracing cores, tensor cores, and GPU-execution-friendly transport methods offer further opportunities for performance enhancement.

Significance: GPU-based MC simulation is expected to remain essential in advancing medical emission and transmission tomography. With the emergence of new concepts such as training Machine Learning with synthetic data, Digital Twins for Healthcare, and Virtual Clinical Trials, improving hardware portability and modularizing GPU-based MC codes to adapt to these evolving simulation needs represent important future research directions. This review aims to provide useful insights for researchers, developers, and practitioners in relevant fields.
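The appeal of GPUs here comes from the independence of photon histories. The toy NumPy sketch below samples free path lengths through a uniform slab and compares the result against the analytic attenuation law; every history is an independent sample, which is what makes the workload embarrassingly parallel. The geometry and attenuation coefficient are invented for illustration; production codes model full physics.

```python
# Toy Monte Carlo photon-transport sketch (NumPy, CPU) illustrating why
# MC maps so well onto GPUs: each photon history is independent.
import numpy as np

rng = np.random.default_rng(42)
mu = 0.2          # linear attenuation coefficient (1/cm), assumed
thickness = 10.0  # slab thickness (cm), assumed
n_photons = 1_000_000

# Sample free path lengths from the exponential attenuation law.
path = rng.exponential(scale=1.0 / mu, size=n_photons)
transmitted = np.count_nonzero(path > thickness)

print(f"MC transmission:       {transmitted / n_photons:.4f}")
print(f"Analytic exp(-mu * t): {np.exp(-mu * thickness):.4f}")
```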

A machine learning-based decision support tool for standardizing intracavitary versus interstitial brachytherapy technique selection in high-dose-rate cervical cancer.

Kajikawa T, Masui K, Sakai K, Takenaka T, Suzuki G, Yoshino Y, Nemoto H, Yamazaki H, Yamada K

PubMed · paper · Aug 20, 2025
To develop and evaluate a machine-learning (ML) decision-support tool that standardizes selection of intracavitary brachytherapy (ICBT) versus hybrid intracavitary/interstitial brachytherapy (IC/ISBT) in high-dose-rate (HDR) cervical cancer. We retrospectively analyzed 159 HDR brachytherapy plans from 50 consecutive patients treated between April 2022 and June 2024. Brachytherapy techniques (ICBT or IC/ISBT) were determined by an experienced radiation oncologist using CT/MRI-based 3-D image-guided brachytherapy. For each plan, 144 shape- and distance-based geometric features describing the high-risk clinical target volume (HR-CTV), bladder, rectum, and applicator were extracted. Nested five-fold cross-validation combined minimum-redundancy-maximum-relevance feature selection with five classifiers (k-nearest neighbors, logistic regression, naïve Bayes, random forest, support-vector classifier) and two voting ensembles (hard and soft voting). Model performance was benchmarked against single-factor rules (HR-CTV > 30 cm³; maximum lateral HR-CTV-tandem distance > 25 mm). Logistic regression achieved the highest test accuracy 0.849 ± 0.023 and a mean area-under-the-curve (AUC) 0.903 ± 0.033, outperforming the volume rule and matching the distance rule's AUC 0.907 ± 0.057 while providing greater accuracy 0.805 ± 0.114. These differences were not statistically significant. Feature-importance analysis showed that the maximum HR-CTV-tandem lateral distance and the bladder's minimal short-axis length consistently dominated model decisions. Conclusions: A compact ML tool using two readily measurable geometric features can reliably assist clinicians in choosing between ICBT and IC/ISBT, thereby reducing inter-physician variability and promoting standardized HDR cervical brachytherapy technique selection.
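A compact sklearn sketch of the described pipeline, feature selection followed by logistic regression scored with cross-validated ROC AUC, is shown below. SelectKBest with mutual information stands in for the paper's mRMR step, the nested cross-validation is simplified to a single loop, and the data are synthetic.

```python
# Minimal sklearn sketch: geometric features -> relevance-based feature
# selection -> logistic regression, scored with cross-validated ROC AUC.
# Mutual-information SelectKBest is a simple stand-in for mRMR.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(159, 144))  # 159 plans x 144 geometric features
# Synthetic label loosely tied to one "lateral distance"-like feature.
y = (X[:, 0] + 0.5 * rng.normal(size=159) > 0).astype(int)

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=10),
    LogisticRegression(max_iter=1000),
)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC = {auc:.3f}")
```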

UNICON: UNIfied CONtinual Learning for Medical Foundational Models

Mohammad Areeb Qazi, Munachiso S Nwadike, Ibrahim Almakky, Mohammad Yaqub, Numan Saeed

arXiv · preprint · Aug 19, 2025
Foundational models are trained on extensive datasets to capture the general trends of a domain. However, in medical imaging, the scarcity of data makes pre-training for every domain, modality, or task challenging. Continual learning offers a solution by fine-tuning a model sequentially on different domains or tasks, enabling it to integrate new knowledge without requiring large datasets for each training phase. In this paper, we propose UNIfied CONtinual Learning for Medical Foundational Models (UNICON), a framework that enables the seamless adaptation of foundation models to diverse domains, tasks, and modalities. Unlike conventional adaptation methods that treat these changes in isolation, UNICON provides a unified, perpetually expandable framework. Through careful integration, we show that foundation models can dynamically expand across imaging modalities, anatomical regions, and clinical objectives without catastrophic forgetting or task interference. Empirically, we validate our approach by adapting a chest CT foundation model initially trained for classification to a prognosis and segmentation task. Our results show improved performance across both additional tasks. Furthermore, we continually incorporated PET scans and achieved a 5% improvement in Dice score compared to respective baselines. These findings establish that foundation models are not inherently constrained to their initial training scope but can evolve, paving the way toward generalist AI models for medical imaging.
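The continual-adaptation recipe the abstract outlines, a shared backbone with task-specific heads added as new domains or tasks arrive, can be sketched minimally in PyTorch as below. This shows only the general pattern; UNICON's specific mechanisms against catastrophic forgetting are in the paper.

```python
# Minimal PyTorch sketch of sequential task adaptation: one shared
# backbone, new heads attached as tasks arrive. Dimensions are assumed.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(512, 256), nn.ReLU())  # stand-in encoder
heads = nn.ModuleDict({
    "classification": nn.Linear(256, 5),  # original task
})

def add_task(name: str, out_dim: int) -> None:
    """Attach a fresh head for a newly arriving task."""
    heads[name] = nn.Linear(256, out_dim)

add_task("prognosis", 1)      # later-added tasks, as in the abstract
add_task("segmentation", 2)

def forward(x: torch.Tensor, task: str) -> torch.Tensor:
    return heads[task](backbone(x))

print(forward(torch.randn(4, 512), "prognosis").shape)  # torch.Size([4, 1])
```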

Multimodal imaging deep learning model for predicting extraprostatic extension in prostate cancer using mpMRI and ¹⁸F-PSMA-PET/CT.

Yao F, Lin H, Xue YN, Zhuang YD, Bian SY, Zhang YY, Yang YJ, Pan KH

PubMed · paper · Aug 19, 2025
This study aimed to construct a multimodal imaging deep learning (DL) model integrating mpMRI and ¹⁸F-PSMA-PET/CT for the prediction of extraprostatic extension (EPE) in prostate cancer, and to assess its effectiveness in enhancing the diagnostic accuracy of radiologists. Clinical and imaging data were retrospectively collected from patients with pathologically confirmed prostate cancer (PCa) who underwent radical prostatectomy (RP). Data were collected from a primary institution (Center 1, n = 197) between January 2019 and June 2022 and an external institution (Center 2, n = 36) between July 2021 and November 2022. A multimodal DL model incorporating mpMRI and ¹⁸F-PSMA-PET/CT was developed to support radiologists in assessing EPE using the EPE-grade scoring system. The predictive performance of the DL model was compared with that of single-modality models, as well as with radiologist assessments with and without model assistance. Clinical net benefit of the model was also assessed. For patients in Center 1, the area under the curve (AUC) for predicting EPE was 0.76 (0.72-0.80), 0.77 (0.70-0.82), and 0.82 (0.78-0.87) for the mpMRI-based DL model, the PET/CT-based DL model, and the combined mpMRI + PET/CT multimodal DL model, respectively. In the external test set (Center 2), the AUCs for these models were 0.75 (0.60-0.88), 0.77 (0.72-0.88), and 0.81 (0.63-0.97), respectively. The multimodal DL model demonstrated superior predictive accuracy compared to single-modality models in both internal and external validations. The deep learning-assisted EPE-grade scoring model significantly improved AUC and sensitivity compared to radiologist EPE-grade scoring alone (P < 0.05), with a modest reduction in specificity. Additionally, the deep learning-assisted scoring model provided greater clinical net benefit than the radiologist EPE-grade score alone. The multimodal imaging deep learning model, integrating mpMRI and ¹⁸F-PSMA-PET/CT, demonstrates promising predictive performance for EPE in prostate cancer and enhances the accuracy of radiologists in EPE assessment. The model holds potential as a supportive tool for more individualized and precise therapeutic decision-making.
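One common way to realize the multimodal integration described above is late fusion: separate mpMRI and PET/CT branches whose embeddings are concatenated before a single EPE prediction head. The PyTorch sketch below illustrates that pattern with assumed feature dimensions; it is not the authors' architecture.

```python
# Minimal PyTorch sketch of late fusion for two imaging modalities:
# per-modality branches, concatenated embeddings, one EPE logit.
# All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class LateFusionEPE(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.mri_branch = nn.Sequential(nn.Linear(256, dim), nn.ReLU())
        self.pet_branch = nn.Sequential(nn.Linear(256, dim), nn.ReLU())
        self.head = nn.Linear(2 * dim, 1)  # EPE present / absent

    def forward(self, mri_feat: torch.Tensor, pet_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.mri_branch(mri_feat),
                           self.pet_branch(pet_feat)], dim=-1)
        return self.head(fused)  # raw logit; apply sigmoid for probability

model = LateFusionEPE()
logit = model(torch.randn(8, 256), torch.randn(8, 256))
print(torch.sigmoid(logit).shape)  # torch.Size([8, 1])
```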

TME-guided deep learning predicts chemotherapy and immunotherapy response in gastric cancer with attention-enhanced residual Swin Transformer.

Sang S, Sun Z, Zheng W, Wang W, Islam MT, Chen Y, Yuan Q, Cheng C, Xi S, Han Z, Zhang T, Wu L, Li W, Xie J, Feng W, Chen Y, Xiong W, Yu J, Li G, Li Z, Jiang Y

PubMed · paper · Aug 19, 2025
Adjuvant chemotherapy and immune checkpoint blockade can elicit durable anti-tumor responses, but the lack of effective biomarkers limits their therapeutic benefit. Drawing on multiple cohorts totaling 3,095 patients with gastric cancer, we propose an attention-enhanced residual Swin Transformer network to predict chemotherapy response (the main task), with two prediction subtasks (ImmunoScore and periostin [POSTN]) serving as intermediate tasks to improve the model's performance. We also assess whether the model can identify patients who would benefit from immunotherapy. The deep learning model achieves high accuracy in predicting both chemotherapy response and the tumor microenvironment markers (ImmunoScore and POSTN). We further find that the model can identify which patients may benefit from checkpoint blockade immunotherapy. This approach offers precise chemotherapy and immunotherapy response predictions, opening avenues for personalized treatment. Prospective studies are warranted to validate its clinical utility.
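The main-task-plus-auxiliary-subtasks training the abstract describes is a standard multi-task setup: one loss for chemotherapy response plus down-weighted losses for the ImmunoScore and POSTN heads. A minimal PyTorch sketch, with an assumed backbone feature size and assumed loss weights, is below.

```python
# Minimal PyTorch sketch of the multi-task setup: a main chemotherapy-
# response head plus two auxiliary heads (ImmunoScore, POSTN) trained
# jointly. Feature size and loss weights are illustrative assumptions.
import torch
import torch.nn as nn

feat = torch.randn(8, 768)           # stand-in for Swin backbone features
response_head = nn.Linear(768, 1)    # main task: chemotherapy response
immuno_head = nn.Linear(768, 1)      # subtask: ImmunoScore
postn_head = nn.Linear(768, 1)       # subtask: POSTN

bce = nn.BCEWithLogitsLoss()
y_resp = torch.randint(0, 2, (8, 1)).float()
y_immu = torch.randint(0, 2, (8, 1)).float()
y_postn = torch.randint(0, 2, (8, 1)).float()

loss = (bce(response_head(feat), y_resp)
        + 0.5 * bce(immuno_head(feat), y_immu)    # assumed weight
        + 0.5 * bce(postn_head(feat), y_postn))   # assumed weight
loss.backward()
print(f"joint loss = {loss.item():.3f}")
```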