Page 90 of 6386373 results

Kabil A, Khoriba G, Yousef M, Rashed EA

PubMed | Oct 6, 2025
Medical Image Segmentation (MIS) stands as a cornerstone in medical image analysis, playing a pivotal role in precise diagnostics, treatment planning, and monitoring of various medical conditions. This paper presents a comprehensive and systematic survey of MIS methodologies, bridging the gap between traditional image processing techniques and modern deep learning approaches. The survey encompasses thresholding, edge detection, region-based segmentation, clustering algorithms, and model-based techniques while also delving into state-of-the-art deep learning architectures such as Convolutional Neural Networks (CNNs), Fully Convolutional Networks (FCNs), and the widely adopted U-Net and its variants. Moreover, the integration of attention mechanisms, semi-supervised learning, generative adversarial networks (GANs), and Transformer-based models is thoroughly explored. In addition to covering established methods, this survey highlights emerging trends, including hybrid architectures, cross-modality learning, federated and distributed learning frameworks, and active learning strategies, which aim to address challenges such as limited labeled datasets, computational complexity, and model generalizability across diverse imaging modalities. Furthermore, a specialized case study on lumbar spine segmentation is presented, offering insights into the challenges and advancements in this relatively underexplored anatomical region. Despite significant progress in the field, critical challenges persist, including dataset bias, domain adaptation, interpretability of deep learning models, and integration into real-world clinical workflows. This survey serves as both a tutorial and a reference guide, particularly for early-career researchers, by providing a holistic understanding of the landscape of MIS and identifying promising directions for future research. Through this work, we aim to contribute to the development of more robust, efficient, and clinically applicable medical image segmentation systems.
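As a concrete instance of the classical thresholding family the survey covers, here is a minimal Otsu's-method segmenter in NumPy; the synthetic bimodal "image" and all parameter choices are illustrative, not drawn from the survey itself:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                     # background class weight
    w1 = 1.0 - w0                         # foreground class weight
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.where(w0 > 0, w0, 1)
    mu1 = (cum_mean[-1] - cum_mean) / np.where(w1 > 0, w1, 1)
    var_between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(var_between)]

# Synthetic bimodal "image": dark background plus a bright structure
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(200, 12, 1000)])
t = otsu_threshold(img)
mask = img > t                            # binary segmentation
```

In practice such a threshold is only a starting point; the survey's deep models exist precisely because intensity alone rarely separates anatomy cleanly.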

Muñoz M, Han X, Camacho J, Perrone T, Smargiassi A, Inchingolo R, Tung-Chen Y, Demi L

PubMed | Oct 6, 2025
Lung ultrasound (LUS) interpretation is often subjective and operator-dependent, motivating the development of automated, artificial intelligence (AI)-based methods. This international, multi-center study evaluated two distinct deep learning approaches for automated LUS severity scoring of pulmonary infections caused by COVID-19: a pre-trained classification model (CM) and a segmentation-based method (SM), assessing performance at the video, exam, and prognostic levels. Two datasets were analyzed: one comprising data from multiple scanners and another using data from a single scanner. Results showed that the SM achieved prognostic-level agreement with expert clinicians comparable to that of the CM. Furthermore, at the exam level, over 84% of examinations were classified with acceptable error (≤ 10 score difference) across both models and datasets, with both methods reaching agreement higher than 95% on the dataset acquired with a single scanner. These results demonstrate the potential of AI-assisted LUS for reliable prognostic assessment and highlight that image quality and acquisition technique are key factors in achieving consistent and generalizable model performance, as well as the potential for international clinical translation.
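The exam-level agreement reported above can be computed as the fraction of exams whose AI and expert scores differ by at most a tolerance; a minimal sketch (the study's exact aggregation is not specified here, so this framing is an assumption, and the scores below are invented):

```python
def exam_agreement(ai_scores, expert_scores, tol=10):
    """Fraction of exams whose AI and expert scores differ by at most `tol`.
    The study's acceptable-error criterion was a score difference <= 10."""
    diffs = [abs(a - e) for a, e in zip(ai_scores, expert_scores)]
    return sum(d <= tol for d in diffs) / len(diffs)

# Hypothetical exam-level scores (not data from the study)
agreement = exam_agreement([12, 30, 55], [20, 45, 52])
```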

Kostiuk V, Rodriguez PP, Loh SA, Wilson E, Mojibian H, Fischer U, Ochoa Chaar CI, Aboian E

PubMed | Oct 6, 2025
Timely detection and monitoring of abdominal aortic aneurysms (AAA) are necessary to prevent rupture and decrease mortality. Artificial intelligence (AI)-based algorithms can automatically detect the presence of AAA on imaging and in radiology reports. The goal of this study is to examine the impact of AI utilization on AAA detection and care compared with the historical standard of care. An AI-based AAA detection and measurement algorithm was deployed in the healthcare system. The software can be used as a phone application and a desktop analytical tool. The team (vascular surgeons, radiologists, and nurses) receives notifications when an AAA ≥ 5 cm is detected on any CT imaging in the network. It also generates monthly lists of all patients with AAA for the team to review. A workflow to ensure timely referral and evaluation was established. All CT reports prior to the software deployment were analyzed for the presence of AAA using natural language processing of radiology reports. Patients with imaging for known AAA monitoring and AAA screening were excluded. Patients were divided into two groups: "pre-AI" and "post-AI" (prior to and after implementation of the AI-driven protocol, respectively). The study compared patient and imaging characteristics, initial evaluation and long-term follow-up, and the timeline between AI-detected scans and AAA repairs. A subgroup analysis assessing the time to evaluation for AAA measuring ≥ 4 cm was performed. The primary outcome was initial evaluation after incidental detection of AAA. Patient and imaging characteristics were similar in both groups. A greater proportion of patients underwent initial AAA evaluation after implementation of AI-assisted AAA care (42% vs 18%, p<0.001). There was a trend toward a shorter evaluation timeline for patients in the post-AI protocol group (22 days vs 83 days, p=0.1). Most patients in both groups were seen by vascular surgeons for the initial AAA evaluation and during long-term follow-up. Similar proportions of patients in both groups were treated with statin, aspirin, and antiplatelet medical therapy at the time of initial evaluation. A greater proportion of patients in the post-AI protocol group had long-term follow-up (45% vs 30%, p=0.004) and had scheduled appointments for long-term AAA monitoring (99% vs 65%, p<0.001). The implementation of the AI-assisted AAA detection and care protocol significantly increased the proportion of patients receiving initial AAA evaluation and long-term follow-up care. It also correlated with a shorter time to initial evaluation, and, for AAA measuring ≥ 5 cm, it shortened the time from detection to repair.
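A simplified sketch of the kind of report-level NLP detection described above, using a regular expression over report text; the pattern, the example report, and the threshold handling are illustrative, not the deployed software evaluated in the study:

```python
import re

# Hypothetical pattern: find an AAA mention followed by a diameter in cm
# within the same sentence. Real report NLP is far more robust than this.
AAA_PATTERN = re.compile(
    r"(?:abdominal aortic aneurysm|AAA)[^.]*?(\d+(?:\.\d+)?)\s*cm",
    re.IGNORECASE,
)

def flag_report(report_text, threshold_cm=5.0):
    """Return (largest AAA diameter found or None, whether it meets threshold)."""
    sizes = [float(m) for m in AAA_PATTERN.findall(report_text)]
    if not sizes:
        return None, False
    largest = max(sizes)
    return largest, largest >= threshold_cm

size, alert = flag_report(
    "Infrarenal abdominal aortic aneurysm measuring 5.4 cm in maximal diameter."
)
```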

Chen Y, Liu N, Lin R, Huang D, Pan L, Chen X, Tang M, Zhan L, Huang Y, Chen J, Huang P, Tang L

PubMed | Oct 6, 2025
To develop a machine learning model integrating habitat-based radiomics and voting algorithms for predicting axillary lymph node metastasis (ALNM) in breast cancer using B-mode and contrast-enhanced ultrasound images. This retrospective study included 246 T1/T2 stage breast cancer patients (246 lesions) from Fujian Cancer Hospital (2016.04-2022.12). Lesions were randomly divided into training (n = 197) and testing (n = 49) datasets. A Gaussian Mixture Model partitioned B-mode ultrasound images into three subregions. Radiomics features, including shape features, first-order features, and texture features, were extracted from whole-tumor (B-mode and CEUS) and subregional (B-mode) ROIs. Multiple classifiers were applied to evaluate the model's diagnostic performance. Voting algorithms were used to integrate habitat-based radiomics, traditional radiomics, and clinical information for model optimization. Diagnostic performance was assessed via accuracy, sensitivity, specificity, and F1-score. Feature extraction yielded 899 features per ROI, including tumor subregions, enhancing prediction robustness. On the testing set, the combined habitat-based model (Habitat-CEUS-Clinical model) achieved an accuracy of 87.76% (95% CI: 0.775, 0.944) and a false positive rate of 7.41% (95% CI: 0.019, 0.202) using hard voting, outperforming single models by 12.25% in terms of accuracy. In comparison, the traditional approach (Whole-CEUS-Clinical model) reached an accuracy of 79.59% (95% CI: 0.678, 0.884). The difference between the Habitat-CEUS-Clinical model and the Whole-CEUS-Clinical model was statistically significant (p < 0.05). Habitat-based radiomics captures tumor heterogeneity more effectively than conventional methods. Dual-modality ultrasound combined with voting algorithms significantly improves ALNM prediction, providing a reliable foundation for computer-aided preoperative planning in breast cancer.
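The habitat step can be sketched with scikit-learn's GaussianMixture partitioning an ROI into three intensity subregions; the synthetic ROI and the use of raw intensity as the only clustering feature are assumptions, not the paper's exact pipeline:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic 1-D "ROI" of pixel intensities with three intensity populations
rng = np.random.default_rng(0)
roi = np.concatenate([rng.normal(40, 5, 300),     # hypoechoic subregion
                      rng.normal(120, 8, 300),    # intermediate subregion
                      rng.normal(200, 6, 300)]).reshape(-1, 1)

# Fit a 3-component Gaussian Mixture Model and assign each pixel a habitat
gmm = GaussianMixture(n_components=3, random_state=0).fit(roi)
habitat_labels = gmm.predict(roi)   # subregion assignment per pixel
```

Radiomics features would then be extracted separately from each labeled subregion rather than from the whole tumor.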

Zhang X, Yang X, Xing Z, Lv S, Qian D, Guo X, Wang J, Lin Y, Cao D

PubMed | Oct 6, 2025
Mechanical thrombectomy (MT) has been recognized as a groundbreaking intervention for acute ischemic stroke (AIS) resulting from large vessel occlusion (LVO). Traditional imaging parameters frequently fall short in capturing the heterogeneity of thrombi and the dynamics of collateral circulation. This study investigates the integration of venous-phase clot radiomics features with arterial-level collateral scores obtained from color-coded multi-phase CT angiography (mCTA) to predict neurological improvement (NI) following MT in LVO-AIS patients. A retrospective analysis was conducted on a series of adult patients with LVO-AIS who underwent mCTA followed by MT. Radiomic features were extracted from the peak-venous and late-venous phases of the mCTA. A machine learning algorithm was then employed to develop radiomic models. The regional leptomeningeal collateral (rLMC) score, derived from color-coded mCTA maps, was documented to assess arterial-level collateral status. A fusion model integrating clinical, collateral, and radiomics data was constructed using logistic regression to predict NI status. The study included 110 AIS patients; the rLMC score was significantly higher in the NI group than in the non-NI group (P<0.001). The clot-based radiomics model exhibited good predictive performance, with AUC values of 0.986 (training set) and 0.831 (test set) for the peak-venous phase. The fusion model based on peak-venous phase data, incorporating clinical parameters, the rLMC score, and radiomics features, showed superior predictive accuracy (AUC: 0.992 in the training set, 0.889 in the test set). Decision curve analysis (DCA) indicated that the combined model offers the greatest potential clinical benefit. The integration of venous-phase clot radiomics features with arterial-level collateral scores and clinical parameters effectively predicts NI after MT in AIS patients.
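The fusion step can be sketched as concatenating clinical variables, the rLMC score, and radiomics features into one design matrix for logistic regression; all data, feature dimensions, and the label rule below are synthetic placeholders, not the study's variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins for the three feature blocks fused in the paper
rng = np.random.default_rng(0)
n = 110                                         # cohort size from the study
clinical = rng.normal(size=(n, 3))              # hypothetical clinical covariates
rlmc = rng.integers(0, 21, size=(n, 1)).astype(float)  # collateral score
radiomics = rng.normal(size=(n, 10))            # hypothetical radiomics features
X = np.hstack([clinical, rlmc, radiomics])
# Invented label rule so the toy outcome depends on collaterals + radiomics
y = (rlmc.ravel() + radiomics[:, 0] + rng.normal(0, 1, n) > 10).astype(int)

fusion = LogisticRegression(max_iter=1000).fit(X, y)
probs = fusion.predict_proba(X)[:, 1]           # predicted probability of NI
```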

Shah C, Ghodasara S, Chen D, Chen PH

PubMed | Oct 6, 2025
The adoption of artificial intelligence (AI) into clinical practice in radiology can be facilitated by following a structured pipeline for implementation. In this paper, we propose a practical framework for the responsible implementation of AI through four phases: validation, deployment, value assessment, and post-deployment surveillance. Validation involves retrospective or offline testing on institutional data to assess the model's local performance. Deployment progresses through limited trial and full deployment stages, with an emphasis on workflow considerations, integrations, operational metrics, and stakeholder feedback. Value assessment is longitudinal throughout these phases and encompasses both financial and non-financial returns on investment (ROI). Finally, ongoing surveillance can detect data drift, monitor clinical performance, and maintain AI safety. The framework proposed herein provides a governance-oriented approach to AI implementation, addressing the core questions: Does it work? Does it help? Does it stay?
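Post-deployment surveillance for data drift can be sketched with a Population Stability Index (PSI) check on the model's output scores; the metric choice and the 0.2 alert threshold are common conventions, not prescriptions from the paper:

```python
import numpy as np

def psi(reference, current, bins=10):
    """Population Stability Index between two score distributions.
    Common rule of thumb (an assumption, not from the paper):
    PSI > 0.2 suggests meaningful drift worth investigating."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf      # catch out-of-range scores
    ref_frac = np.histogram(reference, edges)[0] / len(reference)
    cur_frac = np.histogram(current, edges)[0] / len(current)
    ref_frac = np.clip(ref_frac, 1e-6, None)   # avoid log(0)
    cur_frac = np.clip(cur_frac, 1e-6, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

# Hypothetical AI output scores at validation time vs. after deployment
rng = np.random.default_rng(1)
baseline = rng.normal(0.5, 0.1, 5000)
shifted = rng.normal(0.65, 0.1, 5000)
```

A scheduled job comparing recent scores against the validation baseline is one lightweight way to operationalize the "Does it stay?" question.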

Bernadette Hahn, Gael Rigaud, Richard Schmähl

arXiv preprint | Oct 5, 2025
Limited-angle computerized tomography stands as one of the most difficult challenges in imaging. Although it opens the way to faster data acquisition in industry and less dangerous scans in medicine, standard approaches, such as the filtered backprojection (FBP) algorithm or the widely used total-variation functional, often produce artefacts that hinder diagnosis. With the rise of deep learning, many modern techniques have proven successful in removing such artefacts, but at the cost of large datasets. In this paper, we propose a new model-driven approach based on the method of the approximate inverse, which could serve as a new starting point for learning strategies in the future. In contrast to FBP-type approaches, our reconstruction step consists of evaluating linear functionals on the measured data using reconstruction kernels that are precomputed as the solution of an auxiliary problem. Because this problem is uniquely solvable, the derived limited-angle reconstruction kernel (LARK) is able to fully reconstruct the object without the well-known streak artefacts, even for large limited angles. However, it inherits severe ill-conditioning, which leads to a different kind of artefact arising from the singular functions of the limited-angle Radon transform. The problem becomes particularly challenging when working on semi-discrete (real or analytical) measurements. We develop a general regularization strategy, named the constrained limited-angle reconstruction kernel (CLARK), by combining spectral filtering, the method of the approximate inverse, and custom edge-preserving denoising in order to stabilize the whole process. We further derive and interpret error estimates for application to real, i.e. semi-discrete, data, and we validate our approach on synthetic and real data.
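The core reconstruction step, evaluating linear functionals on the data with kernels precomputed from an auxiliary problem, can be illustrated on a toy discrete operator; the matrix below is a random, well-conditioned stand-in for the limited-angle Radon transform, not the actual LARK/CLARK construction:

```python
import numpy as np

# Toy approximate-inverse sketch: each kernel psi_x solves the auxiliary
# problem A^T psi_x = e_x, and the reconstruction at x is the linear
# functional <psi_x, g> of the measured data g = A f.
rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n)) + n * np.eye(n)   # toy forward operator
f = rng.normal(size=n)                        # unknown object
g = A @ f                                     # measured data

kernels = np.linalg.solve(A.T, np.eye(n))     # one precomputed kernel per point
recon = kernels.T @ g                         # evaluate functionals on the data
```

Here the mollifier is a discrete delta, so the toy recovers `f` exactly; the paper's regularization (CLARK) addresses the ill-conditioning that appears when the operator is the limited-angle transform rather than a benign matrix.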

Kushal Vyas, Ashok Veeraraghavan, Guha Balakrishnan

arXiv preprint | Oct 5, 2025
Implicit neural representations (INRs) have achieved remarkable successes in learning expressive yet compact signal representations. However, they are not naturally amenable to predictive tasks such as segmentation, where they must learn semantic structures over a distribution of signals. In this study, we introduce MetaSeg, a meta-learning framework to train INRs for medical image segmentation. MetaSeg uses an underlying INR that simultaneously predicts per pixel intensity values and class labels. It then uses a meta-learning procedure to find optimal initial parameters for this INR over a training dataset of images and segmentation maps, such that the INR can simply be fine-tuned to fit pixels of an unseen test image, and automatically decode its class labels. We evaluated MetaSeg on 2D and 3D brain MRI segmentation tasks and report Dice scores comparable to commonly used U-Net models, but with $90\%$ fewer parameters. MetaSeg offers a fresh, scalable alternative to traditional resource-heavy architectures such as U-Nets and vision transformers for medical image segmentation. Our project is available at https://kushalvyas.github.io/metaseg.html .
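The Dice score used to evaluate MetaSeg is a standard overlap metric between binary masks; a minimal NumPy implementation (the example masks are invented):

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks (the metric reported above)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    # eps keeps the score defined (1.0) when both masks are empty
    return float((2 * inter + eps) / (pred.sum() + target.sum() + eps))
```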

Clackett W, Zealley IA, Yang Z, Salahia G, White RD

PubMed | Oct 5, 2025
This study aimed to evaluate the feasibility of using a large language model (LLM) to generate patient information leaflets (PILs) with improved readability, based on existing PILs in the field of interventional radiology. PILs were acquired from the Cardiovascular and Interventional Radiology Society of Europe website, reformatted, and uploaded to the GPT-4 user interface with a prompt designed to simplify the language. Automated readability metrics were used to evaluate the readability of the original and LLM-modified PILs. Factual accuracy was assessed by human evaluation from three consultant interventional radiologists using an agreed marking scheme. LLM-modified PILs had a significantly lower mean reading grade (9.5±0.5) compared with the original PILs (11.1±0.1) (p<0.01). However, the recommended reading grade of 6 (expected to be understood by 11- to 12-year-old children) was not achieved. Human evaluation revealed that most LLM-modified PILs had minor concerns regarding factual accuracy, but no errors that could result in serious patient harm were detected. LLMs appear to be a powerful tool for improving the readability of PILs within the field of interventional radiology. However, clinical experts are still required in PIL development to ensure the factual accuracy of these augmented documents is not compromised. LLMs should be considered a useful tool to assist with the development and revision of PILs in the field of interventional radiology.
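One readability metric of the kind used in such studies, the Flesch-Kincaid grade level, can be sketched with a simple vowel-group syllable heuristic; the example sentences are invented, and production readability tools count syllables more carefully:

```python
import re

def fk_grade(text):
    """Flesch-Kincaid grade level with a crude vowel-group syllable heuristic."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    def syllables(word):
        # Count runs of vowels as syllables; at least one per word
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))
    total_syllables = sum(syllables(w) for w in words)
    return (0.39 * len(words) / sentences
            + 11.8 * total_syllables / len(words) - 15.59)

# Invented before/after pair illustrating the simplification task
original = ("Percutaneous transhepatic cholangiography demonstrates "
            "biliary obstruction requiring decompression.")
simplified = "A scan of your bile ducts shows a blockage. We will drain it."
```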

Escartín J, López-Úbeda P, Martín-Noguerol T, Luna A

PubMed | Oct 5, 2025
Ischemic stroke, a leading cause of global disability and mortality, demands precise etiological classification for effective management. The variability in the use of existing stroke classification systems, along with the challenges of manual etiological labeling from brain MRI radiological reports, calls for an innovative approach. This study aims to develop and evaluate a Natural Language Processing (NLP) algorithm using transformer-based models for the extraction and classification of ischemic stroke types from MRI reports, enhancing diagnostic efficiency and stroke management. We built a dataset comprising 635 brain MRI reports, annotated for four distinct ischemic stroke types. All were clinically consistent with focal neurologic impairment due to stroke. The study evaluated two pre-trained BERT models (Clinical BERT and BETO) and two RoBERTa models (RoBERTa clinical trials and RoBERTa biomedical), focusing on their ability to accurately classify stroke subtypes. The RoBERTa biomedical model emerged as the most effective, demonstrating superior performance with an accuracy of 76.7% with statistically significant results. This model also achieved the highest precision, recall, and F1 scores across all stroke types, indicating its robustness in stroke subtype classification. The study highlights the potential of NLP algorithms in automating stroke classification from MRI reports, which could significantly aid diagnostic processes and streamline stroke management strategies.
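To make the task framing concrete, here is a classical TF-IDF baseline for mapping report text to one of four stroke subtypes; the reports and labels are invented, and this is not one of the transformer models the study evaluated:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented mini-corpus: four stroke-subtype report snippets, repeated to give
# the classifier something to fit. Real reports are far longer and noisier.
reports = [
    "Acute infarct in the left MCA territory, likely cardioembolic source.",
    "Multiple small lacunar infarcts in the basal ganglia.",
    "Watershed infarcts suggesting large artery atherosclerosis.",
    "Acute infarct of undetermined etiology.",
] * 5
labels = ["cardioembolic", "lacunar", "atherosclerosis", "undetermined"] * 5

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(reports, labels)
pred = clf.predict(["Small lacunar infarct in the internal capsule."])
```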
