
Zhang Z, Hides JA, De Martino E, Millner J, Tuxworth G

PubMed · Aug 20, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content</i>. Chronic low back pain is a global health issue with considerable socioeconomic burdens and is associated with changes in lumbar paraspinal muscles (LPM). In this retrospective study, a deep learning method was trained and externally validated for automated LPM segmentation, muscle volume quantification, and fatty infiltration assessment across multisequence MRIs. A total of 1,302 MRIs from 641 participants across five centers were included. Data from two centers were used for model training and tuning, while data from the remaining three centers were used for external testing. Model segmentation performance was evaluated against manual segmentation using the Dice similarity coefficient (DSC), and measurement accuracy was assessed using two one-sided tests and Intraclass Correlation Coefficients (ICCs). The model achieved global DSC values of 0.98 on the internal test set and 0.93 to 0.97 on external test sets. Statistical equivalence between automated and manual measurements of muscle volume and fat ratio was confirmed in most regions (<i>P</i> < .05). Agreement between automated and manual measurements was high (ICCs > 0.92). In conclusion, the proposed automated method accurately segmented LPM and demonstrated statistical equivalence to manual measurements of muscle volume and fatty infiltration ratio across multisequence, multicenter MRIs. ©RSNA, 2025.

Sasson I, Sorin V, Ziv-Baran T, Marom EM, Czerniawski E, Adam SZ, Aviram G

PubMed · Aug 20, 2025
Pulmonary embolism is commonly associated with deep vein thrombosis and the components of Virchow's triad: hypercoagulability, stasis, and endothelial injury. High-risk patients are traditionally those with prolonged immobility and hypercoagulability. Recent findings of pulmonary thrombosis (PT) in healthy combat soldiers, found on CT performed for initial trauma assessment, challenge this assumption. The aim of this study was to investigate the prevalence and characteristics of PT detected in acute traumatic war injuries and to evaluate the effectiveness of an artificial intelligence (AI) algorithm in this setting. This retrospective study analyzed immediate post-trauma CT scans of war-injured patients aged 18-45 from two tertiary hospitals between October 7, 2023, and January 7, 2024. Thrombi were retrospectively detected using AI software and confirmed by two senior radiologists. Findings were compared with the original reports, and clinical and injury-related data were analyzed. Of 190 patients (median age 24 years, IQR 21.0-30.0; 183 males), the AI identified 10 patients with confirmed PT (5.6%), six (60%) of whom were not originally diagnosed. The only statistically significant difference between PT and non-PT patients was greater complexity and severity of injuries (higher Injury Severity Score, median (IQR) 21.0 (20.0-21.0) vs 9.0 (4.0-14.5), p = 0.01, respectively). Despite the presence of thrombi, significant right ventricular dilatation was absent in all patients. This report of early PT in war-injured patients provides a unique opportunity to characterize these findings. PT occurs more frequently than anticipated and without clinical suspicion, highlighting the need for improved radiologist awareness and the crucial role of AI systems as diagnostic support tools. Question What is the prevalence, and what are the radiological characteristics, of arterial clotting within the pulmonary arteries in young acute trauma patients? Findings A surprisingly high occurrence of PT with a high rate of missed diagnoses by radiologists; none of the cases showed right ventricular dysfunction. Clinical relevance PT is a distinct clinical entity separate from traditional venous thromboembolism, which raises the need for further investigation of the appropriate treatment paradigm.

Obreja B, Bosma J, Venkadesh KV, Saghir Z, Prokop M, Jacobs C

PubMed · Aug 20, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content</i>. Purpose To investigate the relationship between training data volume and performance of a deep learning AI algorithm developed to assess the malignancy risk of pulmonary nodules detected on low-dose CT scans in lung cancer screening. Materials and Methods This retrospective study used a dataset of 16077 annotated nodules (1249 malignant, 14828 benign) from the National Lung Screening Trial (NLST) to systematically train an AI algorithm for pulmonary nodule malignancy risk prediction across various stratified subsets ranging from 1.25% to the full dataset. External testing was conducted using data from the Danish Lung Cancer Screening Trial (DLCST) to determine the amount of training data at which the performance of the AI was statistically non-inferior to the AI trained on the full NLST cohort. A size-matched cancer-enriched subset of DLCST, where each malignant nodule had been paired in diameter with the closest two benign nodules, was used to investigate the amount of training data at which the performance of the AI algorithm was statistically non-inferior to the average performance of 11 clinicians. Results The external testing set included 599 participants (mean age 57.65 (SD 4.84) for females and mean age 59.03 (SD 4.94) for males) with 883 nodules (65 malignant, 818 benign). The AI achieved a mean AUC of 0.92 [95% CI: 0.88, 0.96] on the DLCST cohort when trained on the full NLST dataset. Training with 80% of NLST data resulted in non-inferior performance (mean AUC 0.92 [95%CI: 0.89, 0.96], <i>P</i> = .005). On the size-matched DLCST subset (59 malignant, 118 benign), the AI reached non-inferior clinician-level performance (mean AUC 0.82 [95% CI: 0.77, 0.86]) with 20% of the training data (<i>P</i> = .02). Conclusion The deep learning AI algorithm demonstrated excellent performance in assessing pulmonary nodule malignancy risk, achieving clinical level performance with a fraction of the training data and reaching peak performance before utilizing the full dataset. ©RSNA, 2025.

Rodriguez-Martinez A, Kothalawala D, Carrillo-Larco RM, Poulakakis-Daktylidis A

PubMed · Aug 20, 2025
Precision medicine marks a transformative shift towards a patient-centric treatment approach, aiming to match 'the right patients with the right drugs at the right time'. The exponential growth of data from diverse omics modalities, electronic health records, and medical imaging has created unprecedented opportunities for precision medicine. This explosion of data requires advanced processing and analytical tools. At the forefront of this revolution is artificial intelligence (AI), which excels at uncovering hidden patterns within these high-dimensional and complex datasets. AI facilitates the integration and analysis of diverse data types, unlocking unparalleled potential to characterise complex diseases, improve prognosis, and predict treatment response. Despite the enormous potential of AI, challenges related to interpretability, reliability, generalisability, and ethical considerations emerge when translating these tools from research settings into clinical practice.

Lang FM, Liu J, Clerkin KJ, Driggin EA, Einstein AJ, Sayer GT, Takeda K, Uriel N, Summers RM, Topkara VK

PubMed · Aug 20, 2025
Sarcopenia is associated with adverse outcomes in patients with end-stage heart failure. Muscle mass can be quantified via manual segmentation of computed tomography images, but this approach is time-consuming and subject to interobserver variability. We sought to determine whether fully automated assessment of radiographic sarcopenia by deep learning would predict heart transplantation outcomes. This retrospective study included 164 adult patients who underwent heart transplantation between January 2013 and December 2022. A deep learning-based tool was utilized to automatically calculate cross-sectional skeletal muscle area at the T11, T12, and L1 levels on chest computed tomography. Radiographic sarcopenia was defined as skeletal muscle index (skeletal muscle area divided by height squared) in the lowest sex-specific quartile. The study population had a mean age of 53±14 years and was predominantly male (75%) with a nonischemic cause (73%). Mean skeletal muscle index was 28.3±7.6 cm²/m² for females versus 33.1±8.1 cm²/m² for males (P<0.001). Cardiac allograft survival was significantly lower in heart transplant recipients with versus without radiographic sarcopenia at T11 (90% versus 98% at 1 year, 83% versus 97% at 3 years, log-rank P=0.02). After multivariable adjustment, radiographic sarcopenia at T11 was associated with an increased risk of cardiac allograft loss or death (hazard ratio, 3.86 [95% CI, 1.35-11.0]; P=0.01). Patients with radiographic sarcopenia also had a significantly increased hospital length of stay (28 [interquartile range, 19-33] versus 20 [interquartile range, 16-31] days; P=0.046). Fully automated quantification of radiographic sarcopenia using pretransplant chest computed tomography successfully predicts cardiac allograft survival. By avoiding interobserver variability and accelerating computation, this approach has the potential to improve candidate selection and outcomes in heart transplantation.
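For context on the exposure definition above, skeletal muscle index is the segmented muscle area divided by height squared, with radiographic sarcopenia defined as the lowest sex-specific quartile. A small pandas sketch of that calculation follows; the column names are assumptions for illustration, not the study's data dictionary.

```python
import pandas as pd

def add_smi_and_sarcopenia(df: pd.DataFrame) -> pd.DataFrame:
    """Add skeletal muscle index (cm^2/m^2) and a lowest-quartile sarcopenia flag.

    Expects columns 'muscle_area_cm2', 'height_m', and 'sex' (hypothetical names).
    """
    out = df.copy()
    out["smi"] = out["muscle_area_cm2"] / out["height_m"] ** 2
    # Lowest quartile computed separately within each sex, per the study definition.
    q25 = out.groupby("sex")["smi"].transform(lambda s: s.quantile(0.25))
    out["radiographic_sarcopenia"] = out["smi"] <= q25
    return out
```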

Luo M, Yousefirizi F, Rouzrokh P, Jin W, Alberts I, Gowdy C, Bouchareb Y, Hamarneh G, Klyuzhin I, Rahmim A

PubMed · Aug 20, 2025
Artificial intelligence (AI) is being explored for a growing range of applications in radiology, including image reconstruction, image segmentation, synthetic image generation, disease classification, worklist triage, and examination scheduling. However, training accurate AI models typically requires substantial amounts of expert-labeled data, which can be time-consuming and expensive to obtain. Active learning offers a potential strategy for mitigating such labeling requirements. In contrast with other machine-learning approaches used for data-limited situations, active learning builds labeled datasets by identifying the most informative or uncertain data for human annotation, thereby reducing the labeling burden required to achieve strong model performance with constrained datasets. This Review explores the application of active learning to radiology AI, focusing on its role in reducing the resources needed to train radiology AI models while enhancing physician-AI interaction and collaboration. We discuss how active learning can be incorporated into radiology workflows to promote physician-in-the-loop AI systems, presenting key active learning concepts and use cases for radiology-based tasks, illustrated with literature-based examples. Finally, we provide summary recommendations for the integration of active learning in radiology workflows while highlighting relevant opportunities, challenges, and future directions.
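To make the core idea concrete, one common query strategy in the active learning literature is to send the most uncertain unlabeled cases to a human annotator, for example ranked by predictive entropy. The sketch below assumes a model that outputs class probabilities for an unlabeled pool; the function name and batch size are illustrative, not a specific method from the Review.

```python
import numpy as np

def entropy_query(probs: np.ndarray, n_query: int = 10) -> np.ndarray:
    """Indices of the unlabeled samples with the highest predictive entropy.

    probs: array of shape (n_samples, n_classes) with predicted class probabilities.
    """
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(entropy)[::-1][:n_query]
```

A typical physician-in-the-loop cycle would train on the current labeled set, score the unlabeled pool, route the n_query most uncertain studies to a radiologist for annotation, and retrain.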

Zhong B, Fan R, Ma Y, Ji X, Cui Q, Cui C

PubMed · Aug 20, 2025
Advancements in deep learning algorithms for medical image analysis have garnered significant attention in recent years. While several studies show promising results, with models achieving or even surpassing human performance, translating these advancements into clinical practice is still accompanied by various challenges. A primary obstacle lies in the limited availability of large-scale, well-characterized datasets for validating the generalization of approaches. To address this challenge, we curated a diverse collection of medical image datasets from multiple public sources, comprising 105 datasets and a total of 1,995,671 images. These images span 14 modalities, including X-ray, computed tomography, magnetic resonance imaging, optical coherence tomography, ultrasound, and endoscopy, and originate from 13 organs, such as the lung, brain, eye, and heart. We then constructed an online database, MedImg, which incorporates and systematically organizes these medical images to facilitate data accessibility. MedImg serves as an intuitive, open-access platform for research in deep learning-based medical image analysis, accessible at https://www.cuilab.cn/medimg/.

Yilihamu Y, Zhao K, Zhong H, Feng SQ

PubMed · Aug 20, 2025
Objective: To investigate the application effectiveness of the artificial intelligence (AI)-based Generative Pre-treatment tool of Skeletal Pathology (GPTSP) in measuring functional lumbar radiographic examinations. Methods: This retrospective case series reviewed the clinical and imaging data of 34 patients who underwent lumbar dynamic X-ray radiography at the Department of Orthopedics, the Second Hospital of Shandong University, from September 2021 to June 2023. Among them, 13 were male and 21 were female, with an age of (68.0±8.0) years (range: 55 to 88 years). The AI model of the GPTSP system was built on the YOLOv8 model with a multi-dimensional constrained loss function, incorporating Kullback-Leibler divergence to quantify the anatomical distribution deviation of lumbar intervertebral space detection boxes, along with a global dynamic attention mechanism. The model identifies lumbar vertebral body edge points and measures the lumbar intervertebral space. The spondylolisthesis index, lumbar index, and lumbar intervertebral angles were measured using three methods: manual measurement by physicians, predefined annotated measurement, and AI-assisted measurement. Consistency between the physicians and the AI model was analyzed using the intraclass correlation coefficient (ICC) and the Kappa coefficient. Results: AI-assisted measurement time was (1.5±0.1) seconds (range: 1.3 to 1.7 seconds), shorter than the manual measurement time ((2,064.4±108.2) seconds, range: 1,768.3 to 2,217.6 seconds) and the predefined annotation measurement time ((602.0±48.9) seconds, range: 503.9 to 694.4 seconds). Kappa values between physicians' diagnoses and the AI model's diagnoses (based on the GPTSP platform) for the spondylolisthesis index, lumbar index, and intervertebral angles measured by the three methods were 0.95, 0.92, and 0.82 (all P<0.01), with ICC values consistently exceeding 0.90, indicating high consistency. Taking physicians' manual measurements as the reference, the mean annotation error decreased from 2.52 mm (range: 0.01 to 6.78 mm) with predefined annotation to 1.47 mm (range: 0 to 5.03 mm) with AI assistance. Conclusions: The GPTSP system enhanced efficiency in functional lumbar analysis. The AI model demonstrated high consistency in annotation and measurement results, showing strong potential to serve as a reliable clinical auxiliary tool.
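The abstract does not specify the exact form of the multi-dimensional constrained loss; as a heavily hedged sketch, a Kullback-Leibler term penalizing deviation of predicted intervertebral-space positions from an anatomical reference distribution could look like the following (PyTorch; the bin discretization and tensor layout are assumptions, not the GPTSP implementation).

```python
import torch
import torch.nn.functional as F

def kl_distribution_penalty(pred_logits: torch.Tensor, ref_probs: torch.Tensor) -> torch.Tensor:
    """KL(reference || predicted) over discretized intervertebral-space position bins.

    pred_logits: unnormalized detector scores over position bins, shape (batch, bins).
    ref_probs:   anatomical prior probabilities over the same bins, shape (batch, bins).
    """
    log_pred = F.log_softmax(pred_logits, dim=-1)
    # F.kl_div expects log-probabilities as input and probabilities as target.
    return F.kl_div(log_pred, ref_probs, reduction="batchmean")
```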

Wang BZ, Zhang X, Wang YL, Wang XY, Wang QG, Luo Z, Xu SL, Huang C

PubMed · Aug 20, 2025
Objective: To develop a preoperative differentiation model for colorectal mucinous adenocarcinoma and non-mucinous adenocarcinoma using a combination of contrast-enhanced CT radiomics and deep learning methods. Methods: This is a retrospective case series. Clinical data of colorectal cancer patients confirmed by postoperative pathological examination from January 2016 to December 2023 were retrospectively collected at Shanghai General Hospital Affiliated to Shanghai Jiao Tong University School of Medicine (Center 1, n=220) and the First Affiliated Hospital of Bengbu Medical University (Center 2, n=51). Among them, 108 patients were diagnosed with mucinous adenocarcinoma, including 55 males and 53 females, with an age of (68.4±12.2) years (range: 38 to 96 years), and 163 patients were diagnosed with non-mucinous adenocarcinoma, including 96 males and 67 females, with an age of (67.9±11.0) years (range: 43 to 94 years). The cases from Center 1 were divided into a training set (n=156) and an internal validation set (n=64) using stratified random sampling at a 7:3 ratio, and the cases from Center 2 were used as an independent external validation set (n=51). The three-dimensional tumor volume of interest was manually segmented on venous-phase contrast-enhanced CT images. Radiomics features were extracted using PyRadiomics, and deep learning features were extracted using the ResNet-18 network; the two sets of features were then combined to form a joint feature set. The consistency of manual segmentation was assessed using the intraclass correlation coefficient. Feature dimensionality reduction was performed using the Mann-Whitney U test and least absolute shrinkage and selection operator (LASSO) regression. Six machine learning algorithms were used to construct models based on radiomics features, deep learning features, and combined features: support vector machine, logistic regression, random forest, extreme gradient boosting, k-nearest neighbors, and decision tree. The discriminative performance of each model was evaluated using receiver operating characteristic curves, the area under the curve (AUC), the DeLong test, and decision curve analysis. Results: After feature selection, 22 features with the most discriminative value were retained, of which 12 were traditional radiomics features and 10 were deep learning features. In the internal validation set, the random forest model based on the combined features achieved the best performance (AUC=0.938, 95% CI: 0.875 to 0.984), superior to the single-modality radiomics feature model (AUC=0.817, 95% CI: 0.702 to 0.913, P=0.048) and the deep learning feature model (AUC=0.832, 95% CI: 0.727 to 0.926, P=0.087); in the independent external validation set, the random forest model with the combined features maintained the highest discriminative performance (AUC=0.891, 95% CI: 0.791 to 0.969), superior to the single-modality radiomics feature model (AUC=0.770, 95% CI: 0.636 to 0.890, P=0.045) and the deep learning feature model (AUC=0.799, 95% CI: 0.652 to 0.911, P=0.169). Conclusion: The combined model based on radiomics and deep learning features from venous-phase contrast-enhanced CT demonstrates good performance in the preoperative differentiation of colorectal mucinous from non-mucinous adenocarcinoma.
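As a rough outline of the modeling pipeline described above, the sketch below concatenates radiomics and deep learning features, selects features with LASSO, and fits a random forest evaluated by AUC. The function name, hyperparameters, and standardization step are assumptions for illustration; the study additionally used a Mann-Whitney U filter, five other classifiers, the DeLong test, and decision curve analysis, which are omitted here.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

def train_combined_model(X_rad, X_deep, y, X_rad_val, X_deep_val, y_val):
    """Concatenate radiomics + deep features, select with LASSO, classify with RF."""
    X = np.hstack([X_rad, X_deep])
    X_val = np.hstack([X_rad_val, X_deep_val])

    scaler = StandardScaler().fit(X)
    Xs, Xvs = scaler.transform(X), scaler.transform(X_val)

    # LASSO-based selection: keep features with nonzero coefficients.
    lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)
    keep = lasso.coef_ != 0

    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(Xs[:, keep], y)
    auc = roc_auc_score(y_val, rf.predict_proba(Xvs[:, keep])[:, 1])
    return rf, keep, auc
```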

Dong Q, Wang JM, Xiu WL

PubMed · Aug 20, 2025
The complex anatomy of the abdominal organs demands high precision in surgical procedures and increases the risk of postoperative complications. Advancements in digital medicine have created new opportunities for precision surgery. This article summarizes the current applications of digital intelligence in precision abdominal surgery. Medical image processing and real-time monitoring technologies provide powerful tools for accurate diagnosis and treatment, while the big data analysis and precise classification capabilities of artificial intelligence further enhance diagnostic efficiency and safety. The article also analyzes the advantages and limitations of digital intelligence in empowering precision abdominal surgery and explores future development directions.