
Role of Large Language Models for Suggesting Nerve Involvement in Upper Limbs MRI Reports with Muscle Denervation Signs.

Martín-Noguerol T, López-Úbeda P, Luna A, Gómez-Río M, Górriz JM

PubMed | Jun 5, 2025
Determining the involvement of specific peripheral nerves (PNs) in the upper limb associated with signs of muscle denervation can be challenging. This study aims to develop, compare, and validate various large language models (LLMs) to automatically identify and establish potential relationships between denervated muscles and their corresponding PNs. We collected 300 retrospective MRI reports in Spanish from upper limb examinations conducted between 2018 and 2024 that showed signs of muscle denervation. An expert radiologist manually annotated these reports based on the affected peripheral nerves (median, ulnar, radial, axillary, and suprascapular). BERT, DistilBERT, mBART, RoBERTa, and Medical-ELECTRA models were fine-tuned and evaluated on the reports. Additionally, an automatic voting system was implemented to consolidate predictions through majority voting. The voting system achieved the highest F1 scores for the median, ulnar, and radial nerves, with scores of 0.88, 1.00, and 0.90, respectively. Medical-ELECTRA also performed well, achieving F1 scores above 0.82 for the axillary and suprascapular nerves. In contrast, mBART demonstrated lower performance, particularly with an F1 score of 0.38 for the median nerve. Our voting system generally outperforms the individually tested LLMs in determining the specific PN likely associated with muscle denervation patterns detected in upper limb MRI reports. This system can thereby assist radiologists by suggesting the implicated PN when generating their radiology reports.
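The abstract does not describe the aggregation step in detail, but majority voting over the per-report labels emitted by each fine-tuned model can be sketched as follows; the function name, label strings, and the strict-majority rule below are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

def majority_vote(per_model_labels):
    """Aggregate per-nerve predictions from several fine-tuned models.

    per_model_labels: one label list per model, e.g.
        [["median", "ulnar"], ["median"], ["median", "radial"]]
    Returns the labels predicted by a strict majority of the models.
    """
    n_models = len(per_model_labels)
    counts = Counter(label for labels in per_model_labels for label in set(labels))
    return {label for label, c in counts.items() if c > n_models / 2}

# Example: three hypothetical models voting on one report
print(majority_vote([["median", "ulnar"], ["median"], ["median", "radial"]]))
# -> {'median'}
```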

Ensemble of weak spectral total-variation learners: a PET-CT case study.

Rosenberg A, Kennedy J, Keidar Z, Zeevi YY, Gilboa G

PubMed | Jun 5, 2025
When solving computer vision problems through machine learning, one often encounters a lack of sufficient training data. To mitigate this, we propose the use of ensembles of weak learners based on spectral total-variation (STV) features (Gilboa G. 2014 A total variation spectral framework for scale and texture analysis. SIAM J. Imaging Sci. 7, 1937-1961. (doi:10.1137/130930704)). The features are related to nonlinear eigenfunctions of the total-variation subgradient and can characterize textures well at various scales. It was shown (Burger M, Gilboa G, Moeller M, Eckardt L, Cremers D. 2016 Spectral decompositions using one-homogeneous functionals. SIAM J. Imaging Sci. 9, 1374-1408. (doi:10.1137/15m1054687)) that, in the one-dimensional case, orthogonal features are generated, whereas in two dimensions the features are empirically weakly correlated. Ensemble learning theory advocates the use of weakly correlated weak learners. We thus propose here to design ensembles using learners based on STV features. To show the effectiveness of this paradigm, we examine a hard real-world medical imaging problem: the predictive value of computed tomography (CT) data for high uptake in positron emission tomography (PET) in patients suspected of skeletal metastases. The database consists of 457 scans with 1524 unique pairs of registered CT and PET slices. Our approach is compared with deep-learning methods and with radiomics features, showing that STV learners perform best (AUC=[Formula: see text]), compared with neural nets (AUC=[Formula: see text]) and radiomics (AUC=[Formula: see text]). We observe that fine STV scales in CT images are especially indicative of the presence of high uptake in PET. This article is part of the theme issue 'Partial differential equations in data science'.
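As a rough illustration of the ensemble idea, assume the STV features for each scale band have already been computed (the decomposition itself is not shown; the arrays below are random placeholders and the choice of shallow decision trees is an assumption). One weak learner is trained per weakly correlated scale band and their probabilities averaged:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_slices, n_scales, n_feats = 200, 8, 16   # hypothetical sizes

# X[:, s, :] stands in for STV features of scale band s; y = high/low PET uptake
X = rng.normal(size=(n_slices, n_scales, n_feats))
y = rng.integers(0, 2, size=n_slices)

# One weak learner per (weakly correlated) scale band
learners = []
for s in range(n_scales):
    clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X[:, s, :], y)
    learners.append(clf)

# Ensemble prediction: average the per-scale probabilities
probs = np.mean([clf.predict_proba(X[:, s, :])[:, 1]
                 for s, clf in enumerate(learners)], axis=0)
print(probs[:5])
```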

Are presentations of thoracic CT performed on admission to the ICU associated with mortality at day-90 in COVID-19 related ARDS?

Le Corre A, Maamar A, Lederlin M, Terzi N, Tadié JM, Gacouin A

PubMed | Jun 5, 2025
Computed tomography (CT) analysis of lung morphology has significantly advanced our understanding of acute respiratory distress syndrome (ARDS). During the Coronavirus Disease 2019 (COVID-19) pandemic, CT imaging was widely utilized to evaluate lung injury and was suggested as a tool for predicting patient outcomes. However, data specifically focused on patients with ARDS admitted to intensive care units (ICUs) remain limited. This retrospective study analyzed patients admitted to ICUs between March 2020 and November 2022 with moderate to severe COVID-19 ARDS. All CT scans performed within 48 h of ICU admission were independently reviewed by three experts. Lung injury severity was quantified using the CT Severity Score (CT-SS; range 0-25). Patients were categorized as having severe disease (CT-SS ≥ 18) or non-severe disease (CT-SS < 18). The primary outcome was all-cause mortality at 90 days. Secondary outcomes included ICU mortality and medical complications during the ICU stay. Additionally, we evaluated a computer-assisted CT-score assessment using artificial intelligence software (CT Pneumonia Analysis®, SIEMENS Healthcare) to explore the feasibility of automated measurement and routine implementation. A total of 215 patients with moderate to severe COVID-19 ARDS were included. The median CT-SS at admission was 18/25 [interquartile range, 15-21]. Among them, 120 patients (56%) had a severe CT-SS (≥ 18), while 95 patients (44%) had a non-severe CT-SS (< 18). The 90-day mortality rates were 20.8% for the severe group and 15.8% for the non-severe group (p = 0.35). No significant association was observed between CT-SS severity and patient outcomes. In patients with moderate to severe COVID-19 ARDS, systematic CT assessment of lung parenchymal injury was not a reliable predictor of 90-day mortality or ICU-related complications.
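For context, the comparison of 90-day mortality between the two CT-SS groups can be reproduced approximately from the reported group sizes and rates; the counts below are rounded reconstructions, and the choice of a chi-square test is an assumption, since the abstract does not state which test was used:

```python
from scipy.stats import chi2_contingency

# Rounded counts reconstructed from the reported rates:
# severe (CT-SS >= 18): ~25/120 deaths at day 90; non-severe: ~15/95 deaths
table = [[25, 120 - 25],
         [15, 95 - 15]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")   # not significant, in line with the study's conclusion
```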

Stable Vision Concept Transformers for Medical Diagnosis

Lijie Hu, Songning Lai, Yuan Hua, Shu Yang, Jingfeng Zhang, Di Wang

arXiv preprint | Jun 5, 2025
Transparency is a paramount concern in the medical field, prompting researchers to delve into the realm of explainable AI (XAI). Among these XAI methods, Concept Bottleneck Models (CBMs), which have drawn much attention recently, aim to restrict the model's latent space to human-understandable high-level concepts by generating a conceptual layer for extracting conceptual features. However, existing methods rely solely on concept features to determine the model's predictions, overlooking the intrinsic feature embeddings within medical images. To address this utility gap between the original models and concept-based models, we propose the Vision Concept Transformer (VCT). Furthermore, despite their benefits, CBMs have been found to negatively impact model performance and to fail to provide stable explanations when faced with input perturbations, which limits their application in the medical field. To address this faithfulness issue, this paper further proposes the Stable Vision Concept Transformer (SVCT), based on VCT, which leverages the vision transformer (ViT) as its backbone and incorporates a conceptual layer. SVCT employs conceptual features to enhance decision-making by fusing them with image features, and it ensures model faithfulness through the integration of Denoised Diffusion Smoothing. Comprehensive experiments on four medical datasets demonstrate that our VCT and SVCT maintain accuracy while remaining interpretable compared to baselines. Furthermore, even when subjected to perturbations, our SVCT model consistently provides faithful explanations, thus meeting the needs of the medical field.
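A minimal sketch of the fusion idea, assuming pooled ViT features are already available; the dimensions, the sigmoid concept activations, and the simple concatenation are placeholders rather than the paper's architecture, and the Denoised Diffusion Smoothing step is omitted:

```python
import torch
import torch.nn as nn

class ConceptFusionHead(nn.Module):
    """Illustrative fusion of concept scores with backbone image features.

    feat_dim, n_concepts, and n_classes are placeholders; the actual SVCT
    uses a ViT backbone plus Denoised Diffusion Smoothing, both omitted here.
    """
    def __init__(self, feat_dim=768, n_concepts=32, n_classes=4):
        super().__init__()
        self.concept_layer = nn.Linear(feat_dim, n_concepts)          # concept bottleneck
        self.classifier = nn.Linear(feat_dim + n_concepts, n_classes)

    def forward(self, image_feats):                  # image_feats: (B, feat_dim) from a ViT
        concepts = torch.sigmoid(self.concept_layer(image_feats))     # human-readable scores
        fused = torch.cat([image_feats, concepts], dim=-1)            # keep image embeddings too
        return self.classifier(fused), concepts

# Smoke test with random features standing in for ViT [CLS] embeddings
logits, concepts = ConceptFusionHead()(torch.randn(2, 768))
print(logits.shape, concepts.shape)   # torch.Size([2, 4]) torch.Size([2, 32])
```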

SAM-aware Test-time Adaptation for Universal Medical Image Segmentation

Jianghao Wu, Yicheng Wu, Yutong Xie, Wenjia Bai, You Zhang, Feilong Tang, Yulong Li, Yasmeen George, Imran Razzak

arXiv preprint | Jun 5, 2025
Universal medical image segmentation using the Segment Anything Model (SAM) remains challenging due to its limited adaptability to medical domains. Existing adaptations, such as MedSAM, enhance SAM's performance in medical imaging but at the cost of reduced generalization to unseen data. Therefore, in this paper, we propose SAM-aware Test-Time Adaptation (SAM-TTA), a fundamentally different pipeline that preserves the generalization of SAM while improving its segmentation performance in medical imaging via a test-time framework. SAM-TTA tackles two key challenges: (1) input-level discrepancies caused by differences in image acquisition between natural and medical images and (2) semantic-level discrepancies due to fundamental differences in object definition between natural and medical domains (e.g., clear boundaries vs. ambiguous structures). Specifically, our SAM-TTA framework comprises (1) Self-adaptive Bezier Curve-based Transformation (SBCT), which adaptively converts single-channel medical images into three-channel SAM-compatible inputs while maintaining structural integrity, to mitigate the input gap between medical and natural images, and (2) Dual-scale Uncertainty-driven Mean Teacher adaptation (DUMT), which employs consistency learning to align SAM's internal representations to medical semantics, enabling efficient adaptation without auxiliary supervision or expensive retraining. Extensive experiments on five public datasets demonstrate that our SAM-TTA outperforms existing TTA approaches and even surpasses fully fine-tuned models such as MedSAM in certain scenarios, establishing a new paradigm for universal medical image segmentation. Code can be found at https://github.com/JianghaoWu/SAM-TTA.
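A minimal sketch of a Bezier-curve intensity remapping that turns a single-channel CT slice into a three-channel input, in the spirit of SBCT; the control points below are fixed by hand and purely illustrative, whereas the paper's transformation chooses them self-adaptively:

```python
import numpy as np

def bezier_remap(img, p1, p2, n=1000):
    """Remap normalized intensities through a cubic Bezier curve.

    Endpoints are fixed at (0, 0) and (1, 1); (p1, p2) are the two control
    points. SAM-TTA's SBCT chooses them adaptively; here they are fixed.
    """
    t = np.linspace(0.0, 1.0, n)
    x = 3 * (1 - t) ** 2 * t * p1[0] + 3 * (1 - t) * t ** 2 * p2[0] + t ** 3
    y = 3 * (1 - t) ** 2 * t * p1[1] + 3 * (1 - t) * t ** 2 * p2[1] + t ** 3
    lo, hi = img.min(), img.max()
    norm = (img - lo) / (hi - lo + 1e-8)
    return np.interp(norm, x, y)        # x is monotone for these control points

ct_slice = np.random.uniform(-1000, 400, size=(64, 64))   # stand-in CT slice (HU)
# Three differently remapped copies form a SAM-compatible 3-channel input
rgb = np.stack([bezier_remap(ct_slice, (0.3, 0.0), (0.7, 1.0)),
                bezier_remap(ct_slice, (0.25, 0.6), (0.75, 0.4)),
                bezier_remap(ct_slice, (0.5, 0.2), (0.5, 0.8))], axis=-1)
print(rgb.shape)   # (64, 64, 3)
```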

Preoperative Prognosis Prediction for Pathological Stage IA Lung Adenocarcinoma: 3D-Based Consolidation Tumor Ratio is Superior to 2D-Based Consolidation Tumor Ratio.

Zhao L, Dong H, Chen Y, Wu F, Han C, Kuang P, Guan X, Xu X

PubMed | Jun 5, 2025
The two-dimensional computed tomography measurement of the consolidation tumor ratio (2D-CTR) has limitations in the prognostic evaluation of early-stage lung adenocarcinoma: the measurement is subject to inter-observer variability and lacks spatial information, which undermines its reliability as a prognostic tool. This study aims to investigate the value of the three-dimensional volume-based CTR (3D-CTR) in preoperative prognosis prediction for pathological Stage IA lung adenocarcinoma, and compare its predictive performance with that of 2D-CTR. A retrospective cohort of 980 patients with pathological Stage IA lung adenocarcinoma who underwent surgery was included. Preoperative thin-section CT images were processed using artificial intelligence (AI) software for 3D segmentation. Tumor solid component volume was quantified using different density thresholds (-300 to -150 HU, in 50 HU intervals), and 3D-CTR was calculated. The optimal threshold associated with prognosis was selected using multivariate Cox regression. The predictive performance of 3D-CTR and 2D-CTR for recurrence-free survival (RFS) post-surgery was compared using receiver operating characteristic (ROC) curves, and the best cutoff value was determined. The integrated discrimination improvement (IDI) was utilized to assess the enhancement in predictive efficacy of 3D-CTR relative to 2D-CTR. Among traditional preoperative factors, 2D-CTR (cutoff value 0.54, HR=1.044, P=0.001) and carcinoembryonic antigen (CEA) were identified as independent prognostic factors for RFS. In 3D analysis, -150 HU was determined as the optimal threshold for distinguishing solid components from ground-glass opacity (GGO) components. The corresponding 3D-CTR (cutoff value 0.41, HR=1.033, P<0.001) was an independent risk factor for RFS. The predictive performance of 3D-CTR was significantly superior to that of 2D-CTR (AUC: 0.867 vs. 0.840, P=0.006), with a substantial enhancement in predictive capacity, as evidenced by an IDI of 0.038 (95% CI: 0.021-0.055, P<0.001). Kaplan-Meier analysis revealed that the 5-year RFS rate for the 3D-CTR >0.41 group was significantly lower than that of the ≤0.41 group (68.5% vs. 96.7%, P<0.001). The 3D-CTR based on a -150 HU density threshold provides a more accurate prediction of postoperative recurrence risk in pathological Stage IA lung adenocarcinoma, demonstrating superior performance compared to traditional 2D-CTR.
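The 3D-CTR itself is a simple ratio: the volume of tumor voxels at or above the chosen density threshold divided by the whole-tumor volume. A minimal sketch, assuming a CT volume in HU and a tumor segmentation mask are available; boundary handling at the threshold is an assumption:

```python
import numpy as np

def ctr_3d(ct_hu, tumor_mask, threshold_hu=-150):
    """3D consolidation tumor ratio: solid-component volume over whole-tumor volume.

    ct_hu: CT volume in Hounsfield units; tumor_mask: boolean tumor segmentation.
    Voxels at or above threshold_hu inside the tumor count as solid component.
    """
    tumor_voxels = tumor_mask.sum()
    if tumor_voxels == 0:
        return 0.0
    solid_voxels = np.logical_and(tumor_mask, ct_hu >= threshold_hu).sum()
    return solid_voxels / tumor_voxels    # voxel spacing cancels out of the ratio

# Toy example: 40% of the "tumor" voxels lie above the -150 HU threshold
ct = np.full((10, 10, 10), -600.0)
mask = np.zeros_like(ct, dtype=bool)
mask[2:7, 2:7, 2:7] = True
ct[2:7, 2:7, 2:4] = -50.0             # a solid portion within the tumor
print(round(ctr_3d(ct, mask), 2))     # 0.4
```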

Comparative analysis of semantic-segmentation models for screen film mammograms.

Rani J, Singh J, Virmani J

PubMed | Jun 5, 2025
Accurate segmentation of mammographic masses is very important, as the shape characteristics of these masses play a significant role in helping radiologists diagnose benign and malignant cases. Recently, various deep learning segmentation algorithms have become popular for segmentation tasks. In the present work, a rigorous performance analysis of ten semantic-segmentation models has been performed with 518 images taken from the DDSM dataset (Digital Database for Screening Mammography), comprising 208 mass images from the BI-RADS 3 class, 150 from BI-RADS 4, and 160 from BI-RADS 5, respectively. These models are (1) simple convolution series models, namely VGG16/VGG19, (2) simple convolution DAG (directed acyclic graph) models, namely U-Net, (3) dilated convolution DAG models, namely ResNet18/ResNet50/ShuffleNet/XceptionNet/InceptionV2/MobileNetV2, and (4) a hybrid model, i.e. hybrid U-Net. On the basis of exhaustive experimentation, it was observed that the dilated convolution DAG models ResNet50, ShuffleNet and MobileNetV2 outperform the other network models, yielding cumulative JI and F1 score values of 0.87 and 0.92, 0.85 and 0.91, and 0.84 and 0.90, respectively. The segmented images obtained by the best-performing models were subjectively analyzed by the participating radiologist in terms of (a) size, (b) margins and (c) shape characteristics. From the objective and subjective analysis it was concluded that ResNet50 is the optimal model for segmentation of difficult-to-delineate breast masses with dense backgrounds and of cases where both masses and micro-calcifications are simultaneously present. The results of the study indicate that the ResNet50 model can be used in a routine clinical environment for segmentation of mammographic masses.
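The JI (Jaccard index) and F1 (Dice) values reported above are standard overlap metrics between predicted and ground-truth masks; a minimal sketch of how they are typically computed (the masks below are toy examples, not DDSM data):

```python
import numpy as np

def jaccard_and_f1(pred, gt):
    """Jaccard index (IoU) and F1/Dice score for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    ji = inter / union if union else 1.0
    f1 = 2 * inter / (pred.sum() + gt.sum()) if (pred.sum() + gt.sum()) else 1.0
    return ji, f1

# Toy masks: the predicted region overlaps most of the ground-truth mass
gt = np.zeros((64, 64)); gt[20:40, 20:40] = 1
pred = np.zeros((64, 64)); pred[22:42, 22:42] = 1
print(jaccard_and_f1(pred, gt))   # JI ~0.68, F1 ~0.81
```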

A Machine Learning Method to Determine Candidates for Total and Unicompartmental Knee Arthroplasty Based on a Voting Mechanism.

Zhang N, Zhang L, Xiao L, Li Z, Hao Z

PubMed | Jun 5, 2025
Knee osteoarthritis (KOA) is a prevalent condition. Accurate selection between total knee arthroplasty (TKA) and unicompartmental knee arthroplasty (UKA) is crucial for optimal treatment in patients who have end-stage KOA, particularly for improving clinical outcomes and reducing healthcare costs. This study proposes a machine learning model based on a voting mechanism to enhance the accuracy of surgical decision-making for KOA patients. Radiographic data were collected from a high-volume joint arthroplasty practice, focusing on anterior-posterior, lateral, and skyline X-ray views. The dataset included 277 TKA and 293 UKA cases, each labeled through intraoperative observations (indicating whether TKA or UKA was the appropriate choice). A five-fold cross-validation approach was used for training and validation. In the proposed method, three base models were first trained independently on single-view images, and a voting mechanism was implemented to aggregate model outputs. The performance of the proposed method was evaluated using metrics such as accuracy and the area under the receiver operating characteristic curve (AUC). The proposed method achieved an accuracy of 94.2% and an AUC of 0.98, demonstrating superior performance compared to existing models. The voting mechanism enabled the base models to effectively utilize the detailed features from all three X-ray views, leading to enhanced predictive accuracy and model interpretability. This study provides a high-accuracy method for surgical decision-making between TKA and UKA for KOA patients, requiring only standard X-rays and offering potential for clinical application in automated referrals and preoperative planning.
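One simple way to realize such a three-view voting scheme, assuming each single-view model outputs a probability for TKA, is to take a hard majority vote for the class decision and average the probabilities for AUC computation; the numbers below are illustrative, not the study's data:

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Hypothetical per-view probabilities of "TKA" from three single-view models
# (anterior-posterior, lateral, skyline) for five patients
p_views = np.array([[0.9, 0.2, 0.7, 0.4, 0.8],    # AP model
                    [0.8, 0.3, 0.6, 0.2, 0.9],    # lateral model
                    [0.7, 0.6, 0.4, 0.3, 0.7]])   # skyline model
y_true = np.array([1, 0, 1, 0, 1])                 # 1 = TKA, 0 = UKA

votes = (p_views >= 0.5).sum(axis=0)               # how many views vote TKA
y_pred = (votes >= 2).astype(int)                  # majority of the three views
score = p_views.mean(axis=0)                       # averaged probability for AUC

print(accuracy_score(y_true, y_pred), roc_auc_score(y_true, score))
```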

Dual energy CT-based Radiomics for identification of myocardial focal scar and artificial beam-hardening.

Zeng L, Hu F, Qin P, Jia T, Lu L, Yang Z, Zhou X, Qiu Y, Luo L, Chen B, Jin L, Tang W, Wang Y, Zhou F, Liu T, Wang A, Zhou Z, Guo X, Zheng Z, Fan X, Xu J, Xiao L, Liu Q, Guan W, Chen F, Wang J, Li S, Chen J, Pan C

PubMed | Jun 5, 2025
Computed tomography is an inadequate method for detecting myocardial focal scar (MFS) due to its moderate density resolution, which is insufficient for distinguishing MFS from artificial beam-hardening (BH). Virtual monochromatic images (VMIs) of dual-energy coronary computed tomography angiography (DECCTA) provide a variety of diagnostic information with significant potential for detecting myocardial lesions. The aim of this study was to assess whether radiomics analysis of VMIs from DECCTA can help distinguish MFS from BH. A prospective cohort of patients suspected of having an old myocardial infarction was assembled at two different centers between January 2021 and June 2024. MFS and BH segmentation, radiomics feature extraction, and feature selection were performed on the VMIs, and four machine learning classifiers were constructed using the strongest selected features. Subsequently, an independent validation was conducted, and a subjective diagnosis of the validation set was provided by a radiologist. The AUC was used to assess the performance of the radiomics models. The training set included 57 patients from center 1 (mean age, 54 years +/- 9; 55 men), and the external validation set included 10 patients from center 2 (mean age, 59 years +/- 10; 9 men). The radiomics models exhibited the highest AUC value of 0.937 (at the 130 keV VMIs), while the radiologist demonstrated the highest AUC value of 0.734 (at the 40 keV VMIs). The integration of radiomic features derived from VMIs of DECCTA with machine learning algorithms has the potential to improve the efficiency of distinguishing MFS from BH.
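The abstract does not specify which classifiers were used, so as a generic sketch of the final modeling step, assume a radiomic feature matrix has already been extracted from the VMI regions (e.g., with a tool such as pyradiomics, not shown here); the placeholder data and the logistic-regression choice below are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholder feature matrix standing in for radiomic features from VMI regions;
# y = 1 for MFS, 0 for beam-hardening
X = rng.normal(size=(120, 20))
y = rng.integers(0, 2, size=120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```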

Exploring Adversarial Watermarking in Transformer-Based Models: Transferability and Robustness Against Defense Mechanism for Medical Images

Rifat Sadik, Tanvir Rahman, Arpan Bhattacharjee, Bikash Chandra Halder, Ismail Hossain

arXiv preprint | Jun 5, 2025
Deep learning models have shown remarkable success in dermatological image analysis, offering potential for automated skin disease diagnosis. Previously, convolutional neural network (CNN)-based architectures achieved immense popularity and success in computer vision (CV) tasks like skin image recognition, generation, and video analysis. With the emergence of transformer-based models, however, CV tasks are nowadays increasingly carried out using these models. Vision Transformers (ViTs) are one such family of transformer-based models that have shown success in computer vision, using self-attention mechanisms to achieve state-of-the-art performance across various tasks. However, their reliance on global attention mechanisms makes them susceptible to adversarial perturbations. This paper aims to investigate the susceptibility of ViTs for medical images to adversarial watermarking, a method that adds so-called imperceptible perturbations in order to fool models. By generating adversarial watermarks through Projected Gradient Descent (PGD), we examine the transferability of such attacks to CNNs and analyze the performance of a defense mechanism, adversarial training. Results indicate that while performance is not compromised for clean images, ViTs become much more vulnerable to adversarial attacks, with accuracy dropping to as low as 27.6%. Adversarial training, however, raises it back up to 90.0%.
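A minimal PGD sketch in the spirit of the attack described above: a generic L-infinity PGD on a placeholder classifier; the paper's watermark construction, datasets, and ViT specifics are not reproduced, and the step sizes are conventional defaults rather than the authors' settings.

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected Gradient Descent: iteratively perturb x within an L-inf eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()        # ascend the loss
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)       # project back to the eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)                # keep a valid image
    return x_adv.detach()

# Smoke test with a tiny stand-in classifier instead of a ViT/CNN
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 2, (4,))
print(pgd_attack(model, x, y).shape)   # torch.Size([4, 3, 32, 32])
```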