
Automated Brain Tumor Classification and Grading Using Multi-scale Graph Neural Network with Spatio-Temporal Transformer Attention Through MRI Scans.

Srivastava S, Jain P, Pandey SK, Dubey G, Das NN

PubMed · Jun 5 2025
Magnetic Resonance Imaging (MRI) is an essential diagnostic tool that provides clinicians with non-invasive images of brain structures and pathological conditions. Brain tumor detection is a vital application that requires specific and effective approaches for both diagnosis and treatment planning. Manual examination of MRI scans is challenged by inconsistent tumor features, including heterogeneity and irregular dimensions, which result in inaccurate assessments of tumor size. To address these challenges, this paper proposes an Automated Classification and Grading Diagnosis Model (ACGDM) using MRI images. Unlike conventional methods, ACGDM introduces a Multi-Scale Graph Neural Network (MSGNN), which dynamically captures hierarchical and multi-scale dependencies in MRI data, enabling more accurate feature representation and contextual analysis. Additionally, the Spatio-Temporal Transformer Attention Mechanism (STTAM) models both spatial MRI patterns and their temporal evolution by incorporating cross-frame dependencies, enhancing the model's sensitivity to subtle disease progression. By analyzing multi-modal MRI sequences, ACGDM dynamically adjusts its focus across spatial and temporal dimensions, enabling precise identification of salient features. Simulations are conducted in Python with standard libraries to evaluate the model on the BraTS 2018, 2019, and 2020 datasets and the Br35H dataset, which encompass diverse MRI scans with expert annotations. Extensive experimentation demonstrates 99.8% accuracy in detecting various tumor types, showcasing the model's potential to improve diagnostic practice and patient outcomes.
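The abstract does not come with code; as a rough illustration of the MSGNN-plus-attention pattern it describes, the sketch below fuses per-scale graph-convolution features with multi-head attention in PyTorch. Every name, dimension, and the graph construction here is a hypothetical stand-in, not the authors' implementation.

```python
# Illustrative sketch only: a multi-scale graph encoder whose per-scale
# outputs are fused by transformer-style attention. Hypothetical, not the
# paper's ACGDM/MSGNN/STTAM code.
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    """One graph convolution: mix neighbor features via a row-normalized adjacency."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) row-normalized adjacency
        return torch.relu(self.lin(adj @ x))

class MultiScaleGNNWithAttention(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int, num_classes: int, num_scales: int = 3):
        super().__init__()
        self.scales = nn.ModuleList(
            [SimpleGraphConv(in_dim, hid_dim) for _ in range(num_scales)]
        )
        self.attn = nn.MultiheadAttention(hid_dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(hid_dim, num_classes)

    def forward(self, x, adjs):
        # adjs: one adjacency per scale, e.g. k-NN graphs built at different radii
        feats = torch.stack([conv(x, a) for conv, a in zip(self.scales, adjs)], dim=1)
        fused, _ = self.attn(feats, feats, feats)  # attend across the scale axis
        return self.head(fused.mean(dim=1))        # (N, num_classes) logits
```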

Intratumoral and peritumoral ultrasound radiomics analysis for predicting HER2-low expression in HER2-negative breast cancer patients: a retrospective analysis of a dual-center study.

Wang J, Gu Y, Zhan Y, Li R, Bi Y, Gao L, Wu X, Shao J, Chen Y, Ye L, Peng M

PubMed · Jun 5 2025
This study explores whether intratumoral and peritumoral radiomics of ultrasound images can predict the low-expression status of human epidermal growth factor receptor 2 (HER2) in HER2-negative breast cancer patients. HER2-negative breast cancer patients were recruited retrospectively and randomly divided into a training cohort (n = 303) and a test cohort (n = 130) at a ratio of 7:3. The region of interest within the breast ultrasound image was designated as the intratumoral region, and expansions of 3 mm, 5 mm, and 8 mm from this region were taken as the peritumoral regions for the extraction of ultrasound radiomic features. Feature extraction and selection were performed, and radiomics scores (Rad-score) were obtained for four scenarios: intratumoral only, intratumoral + peritumoral 3 mm, intratumoral + peritumoral 5 mm, and intratumoral + peritumoral 8 mm. An optimal combined nomogram radiomic model incorporating clinical features was established and validated, and the diagnostic performance of the radiomic models was evaluated. The intratumoral + peritumoral (5 mm) ultrasound radiomics exhibited excellent diagnostic performance in evaluating HER2-low expression. The nomogram combining intratumoral + peritumoral (5 mm) radiomics and clinical features showed superior diagnostic performance, achieving areas under the curve (AUC) of 0.911 and 0.869 in the training and test cohorts, respectively. The combination of intratumoral + peritumoral (5 mm) ultrasound radiomics and clinical features can accurately predict the low-expression status of HER2 in HER2-negative breast cancer patients.
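As a concrete reading of the expansion step, a peritumoral ring can be obtained by morphologically dilating the intratumoral mask and subtracting the original; the sketch below assumes an isotropic pixel spacing, since the paper's exact construction is not given in the abstract.

```python
# Sketch of one way to build the peritumoral regions (3/5/8 mm) from the
# intratumoral ROI mask; the dilation-based construction and the spacing
# value are assumptions for illustration.
import numpy as np
from scipy import ndimage

def peritumoral_ring(mask: np.ndarray, margin_mm: float, spacing_mm: float) -> np.ndarray:
    """Boolean ring of pixels within margin_mm outside the tumor mask."""
    n_iter = max(1, round(margin_mm / spacing_mm))
    dilated = ndimage.binary_dilation(mask, iterations=n_iter)
    return dilated & ~mask

# Example: the best-performing intratumoral + 5 mm region
# tumor = ...  # boolean ROI mask from the ultrasound image
# combined = tumor | peritumoral_ring(tumor, margin_mm=5, spacing_mm=0.2)
```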

Clinical validation of a deep learning model for low-count PET image enhancement.

Long Q, Tian Y, Pan B, Xu Z, Zhang W, Xu L, Fan W, Pan T, Gong NJ

PubMed · Jun 5 2025
To investigate the effects of the deep learning model RaDynPET on fourfold reduced-count whole-body PET examinations, a total of 120 patients (84 in the internal cohort and 36 in the external cohort) undergoing ¹⁸F-FDG PET/CT examinations were enrolled. PET images were reconstructed using the OSEM algorithm from 120-s (G120) and 30-s (G30) list-mode data. RaDynPET was developed to generate enhanced images (R30) from G30. Two experienced nuclear medicine physicians independently evaluated subjective image quality using a 5-point Likert scale. Standardized uptake values (SUV), standard deviations, liver signal-to-noise ratio (SNR), lesion tumor-to-background ratio (TBR), and contrast-to-noise ratio (CNR) were compared. Subgroup analyses evaluated performance across demographics, and lesion detectability was evaluated using the external dataset. RaDynPET was also compared with other deep learning methods. In the internal cohort, R30 demonstrated significantly higher image-quality scores than G30 and G120. R30 showed excellent agreement with G120 for liver and lesion SUV values and surpassed G120 in liver SNR and CNR. Liver SNR and CNR of R30 were comparable to G120 in the thin group, and the CNR of R30 was comparable to G120 in the young-age group. In the external cohort, R30 maintained strong SUV agreement with G120, with lesion-level sensitivity and specificity of 95.45% and 98.41%, respectively. There was no statistical difference in lesion detection between R30 and G120. RaDynPET achieved the highest PSNR and SSIM among the deep learning methods. The RaDynPET model effectively restored high image quality while maintaining SUV agreement for ¹⁸F-FDG PET scans acquired in 25% of the standard acquisition time.
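The comparison rests on standard ROI statistics; below is a small sketch of the conventional definitions of liver SNR, lesion TBR, and CNR. The paper's exact ROI placement and definitions are not restated in the abstract, so treat these as the usual formulas rather than the authors' code.

```python
# Conventional PET image-quality metrics from SUV samples in ROIs;
# definitions vary across papers, so these are illustrative.
import numpy as np

def liver_snr(liver_suv: np.ndarray) -> float:
    return liver_suv.mean() / liver_suv.std()

def lesion_tbr(lesion_suv: np.ndarray, background_suv: np.ndarray) -> float:
    return lesion_suv.max() / background_suv.mean()

def lesion_cnr(lesion_suv: np.ndarray, background_suv: np.ndarray) -> float:
    return (lesion_suv.mean() - background_suv.mean()) / background_suv.std()
```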

Role of Large Language Models for Suggesting Nerve Involvement in Upper Limbs MRI Reports with Muscle Denervation Signs.

Martín-Noguerol T, López-Úbeda P, Luna A, Gómez-Río M, Górriz JM

PubMed · Jun 5 2025
Determining the involvement of specific peripheral nerves (PNs) in the upper limb associated with signs of muscle denervation can be challenging. This study aims to develop, compare, and validate various large language models (LLMs) to automatically identify and establish potential relationships between denervated muscles and their corresponding PNs. We collected 300 retrospective MRI reports in Spanish from upper limb examinations conducted between 2018 and 2024 that showed signs of muscle denervation. An expert radiologist manually annotated these reports based on the affected peripheral nerves (median, ulnar, radial, axillary, and suprascapular). BERT, DistilBERT, mBART, RoBERTa, and Medical-ELECTRA models were fine-tuned and evaluated on the reports. Additionally, an automatic voting system was implemented to consolidate predictions through majority voting. The voting system achieved the highest F1 scores for the median, ulnar, and radial nerves, with scores of 0.88, 1.00, and 0.90, respectively. Medical-ELECTRA also performed well, achieving F1 scores above 0.82 for the axillary and suprascapular nerves. In contrast, mBART demonstrated lower performance, particularly with an F1 score of 0.38 for the median nerve. Our voting system generally outperforms the individually tested LLMs in determining the specific PN likely associated with muscle denervation patterns detected in upper limb MRI reports. This system can thereby assist radiologists by suggesting the implicated PN when generating their radiology reports.
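A minimal sketch of the consolidation step, assuming each fine-tuned model emits one binary prediction per nerve; the data layout and function are ours for illustration.

```python
# Majority voting over per-model, per-nerve binary predictions
# (hypothetical data layout; not the authors' code).
from collections import Counter

NERVES = ["median", "ulnar", "radial", "axillary", "suprascapular"]

def majority_vote(predictions: list[dict]) -> dict:
    """predictions: one dict per model, mapping nerve name -> 0/1."""
    return {
        nerve: Counter(p[nerve] for p in predictions).most_common(1)[0][0]
        for nerve in NERVES
    }

# With the five fine-tuned models, ties cannot occur:
# majority_vote([bert_pred, distilbert_pred, mbart_pred, roberta_pred, electra_pred])
```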

Ensemble of weak spectral total-variation learners: a PET-CT case study.

Rosenberg A, Kennedy J, Keidar Z, Zeevi YY, Gilboa G

PubMed · Jun 5 2025
Solving computer vision problems through machine learning, one often encounters a lack of sufficient training data. To mitigate this, we propose the use of ensembles of weak learners based on spectral total-variation (STV) features (Gilboa G. 2014 A total variation spectral framework for scale and texture analysis. SIAM J. Imaging Sci. 7, 1937-1961. (doi:10.1137/130930704)). The features are related to nonlinear eigenfunctions of the total-variation subgradient and can characterize textures well at various scales. It was shown (Burger M, Gilboa G, Moeller M, Eckardt L, Cremers D. 2016 Spectral decompositions using one-homogeneous functionals. SIAM J. Imaging Sci. 9, 1374-1408. (doi:10.1137/15m1054687)) that, in the one-dimensional case, orthogonal features are generated, whereas in two dimensions the features are empirically lowly correlated. Ensemble learning theory advocates the use of lowly correlated weak learners. We therefore propose to design ensembles using learners based on STV features. To show the effectiveness of this paradigm, we examine a hard real-world medical imaging problem: the predictive value of computed tomography (CT) data for high uptake in positron emission tomography (PET) in patients suspected of skeletal metastases. The database consists of 457 scans with 1524 unique pairs of registered CT and PET slices. Our approach is compared with deep-learning methods and with radiomics features, showing that STV learners perform best (AUC=[Formula: see text]), compared with neural nets (AUC=[Formula: see text]) and radiomics (AUC=[Formula: see text]). We observe that fine STV scales in CT images are especially indicative of the presence of high uptake in PET. This article is part of the theme issue 'Partial differential equations in data science'.
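Assuming the STV responses at each scale have already been collected into a feature matrix, such an ensemble can be assembled with off-the-shelf tooling; the boosted decision stumps below are our stand-in weak learners, since the abstract does not name the exact ensemble scheme.

```python
# Ensemble of weak learners over precomputed STV band features (the STV
# decomposition itself is out of scope here). X: (n_slices, n_stv_scales)
# CT-derived features; y: binary high-uptake-in-PET labels.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

weak = DecisionTreeClassifier(max_depth=1)            # decision stump
ens = AdaBoostClassifier(estimator=weak, n_estimators=200)
# auc = cross_val_score(ens, X, y, scoring="roc_auc", cv=5).mean()
```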

Are presentations of thoracic CT performed on admission to the ICU associated with mortality at day-90 in COVID-19 related ARDS?

Le Corre A, Maamar A, Lederlin M, Terzi N, Tadié JM, Gacouin A

PubMed · Jun 5 2025
Computed tomography (CT) analysis of lung morphology has significantly advanced our understanding of acute respiratory distress syndrome (ARDS). During the Coronavirus Disease 2019 (COVID-19) pandemic, CT imaging was widely utilized to evaluate lung injury and was suggested as a tool for predicting patient outcomes. However, data specifically focused on patients with ARDS admitted to intensive care units (ICUs) remain limited. This retrospective study analyzed patients admitted to ICUs between March 2020 and November 2022 with moderate to severe COVID-19 ARDS. All CT scans performed within 48 h of ICU admission were independently reviewed by three experts. Lung injury severity was quantified using the CT Severity Score (CT-SS; range 0-25). Patients were categorized as having severe disease (CT-SS ≥ 18) or non-severe disease (CT-SS < 18). The primary outcome was all-cause mortality at 90 days. Secondary outcomes included ICU mortality and medical complications during the ICU stay. Additionally, we evaluated a computer-assisted CT-score assessment using artificial intelligence software (CT Pneumonia Analysis®, Siemens Healthcare) to explore the feasibility of automated measurement and routine implementation. A total of 215 patients with moderate to severe COVID-19 ARDS were included. The median CT-SS at admission was 18/25 [interquartile range, 15-21]. Among them, 120 patients (56%) had a severe CT-SS (≥ 18), while 95 patients (44%) had a non-severe CT-SS (< 18). The 90-day mortality rates were 20.8% for the severe group and 15.8% for the non-severe group (p = 0.35). No significant association was observed between CT-SS severity and patient outcomes. In patients with moderate to severe COVID-19 ARDS, systematic CT assessment of lung parenchymal injury was not a reliable predictor of 90-day mortality or ICU-related complications.
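For reference, the CT-SS sums a 0-5 involvement score over the five lobes (hence the 0-25 range); the percentage cut-points in the sketch below follow a commonly used scheme and are an assumption, not quoted from the paper.

```python
# Lobe-wise CT severity score (CT-SS, 0-25). The cut-points are a commonly
# used scheme, assumed here for illustration.
def lobe_score(pct_involved: float) -> int:
    if pct_involved == 0:  return 0
    if pct_involved < 5:   return 1
    if pct_involved <= 25: return 2
    if pct_involved <= 49: return 3
    if pct_involved <= 75: return 4
    return 5

def ct_ss(lobe_percentages) -> int:
    """Five values, one per lobe; severe disease is CT-SS >= 18."""
    return sum(lobe_score(p) for p in lobe_percentages)

# ct_ss([80, 80, 80, 55, 50]) -> 23, i.e. severe
```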

Preoperative Prognosis Prediction for Pathological Stage IA Lung Adenocarcinoma: 3D-Based Consolidation Tumor Ratio is Superior to 2D-Based Consolidation Tumor Ratio.

Zhao L, Dong H, Chen Y, Wu F, Han C, Kuang P, Guan X, Xu X

PubMed · Jun 5 2025
The two-dimensional computed tomography measurement of the consolidation tumor ratio (2D-CTR) has limitations in the prognostic evaluation of early-stage lung adenocarcinoma: the measurement is subject to inter-observer variability and lacks spatial information, which undermines its reliability as a prognostic tool. This study aims to investigate the value of the three-dimensional volume-based CTR (3D-CTR) in preoperative prognosis prediction for pathological Stage IA lung adenocarcinoma, and compare its predictive performance with that of 2D-CTR. A retrospective cohort of 980 patients with pathological Stage IA lung adenocarcinoma who underwent surgery was included. Preoperative thin-section CT images were processed using artificial intelligence (AI) software for 3D segmentation. Tumor solid component volume was quantified using different density thresholds (-300 to -150 HU, in 50 HU intervals), and 3D-CTR was calculated. The optimal threshold associated with prognosis was selected using multivariate Cox regression. The predictive performance of 3D-CTR and 2D-CTR for recurrence-free survival (RFS) post-surgery was compared using receiver operating characteristic (ROC) curves, and the best cutoff value was determined. The integrated discrimination improvement (IDI) was utilized to assess the enhancement in predictive efficacy of 3D-CTR relative to 2D-CTR. Among traditional preoperative factors, 2D-CTR (cutoff value 0.54, HR=1.044, P=0.001) and carcinoembryonic antigen (CEA) were identified as independent prognostic factors for RFS. In 3D analysis, -150 HU was determined as the optimal threshold for distinguishing solid components from ground-glass opacity (GGO) components. The corresponding 3D-CTR (cutoff value 0.41, HR=1.033, P<0.001) was an independent risk factor for RFS. The predictive performance of 3D-CTR was significantly superior to that of 2D-CTR (AUC: 0.867 vs. 0.840, P=0.006), with a substantial enhancement in predictive capacity, as evidenced by an IDI of 0.038 (95% CI: 0.021-0.055, P<0.001). Kaplan-Meier analysis revealed that the 5-year RFS rate for the 3D-CTR >0.41 group was significantly lower than that of the ≤0.41 group (68.5% vs. 96.7%, P<0.001). The 3D-CTR based on a -150 HU density threshold provides a more accurate prediction of postoperative recurrence risk in pathological Stage IA lung adenocarcinoma, demonstrating superior performance compared to traditional 2D-CTR.
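The volume-based ratio itself is straightforward once the AI segmentation is available; a minimal sketch under the study's -150 HU threshold follows (voxel spacing cancels in the ratio, so voxel counts suffice).

```python
# 3D consolidation tumor ratio: solid-component volume / whole-tumor volume,
# with "solid" defined by a HU threshold (-150 HU was optimal in this study).
import numpy as np

def ctr_3d(hu_volume: np.ndarray, tumor_mask: np.ndarray, threshold_hu: float = -150) -> float:
    tumor_voxels = hu_volume[tumor_mask]
    solid = tumor_voxels >= threshold_hu   # solid vs. ground-glass component
    return solid.sum() / tumor_voxels.size

# Patients with ctr_3d(...) > 0.41 fell into the high-recurrence-risk group.
```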

Comparative analysis of semantic-segmentation models for screen film mammograms.

Rani J, Singh J, Virmani J

PubMed · Jun 5 2025
Accurate segmentation of mammographic masses is very important, as the shape characteristics of these masses play a significant role when radiologists diagnose benign and malignant cases. Recently, various deep learning segmentation algorithms have become popular for segmentation tasks. In the present work, a rigorous performance analysis of ten semantic-segmentation models was performed on 518 images taken from the DDSM dataset (digital database for screening mammography), comprising 208 BI-RADS 3, 150 BI-RADS 4, and 160 BI-RADS 5 mass images. These models are (1) simple convolution series models, namely VGG16/VGG19; (2) simple convolution DAG (directed acyclic graph) models, namely U-Net; (3) dilated convolution DAG models, namely ResNet18/ResNet50/ShuffleNet/XceptionNet/InceptionV2/MobileNetV2; and (4) a hybrid model, i.e., hybrid U-Net. On the basis of exhaustive experimentation, it was observed that the dilated convolution DAG models ResNet50, ShuffleNet, and MobileNetV2 outperform the other network models, yielding cumulative JI and F1 score values of 0.87 and 0.92, 0.85 and 0.91, and 0.84 and 0.90, respectively. The segmented images obtained by the best-performing models were subjectively analyzed by the participating radiologist in terms of (a) size, (b) margins, and (c) shape characteristics. From the objective and subjective analysis it was concluded that ResNet50 is the optimal model for segmentation of difficult-to-delineate breast masses with dense backgrounds, and of masses where both masses and micro-calcifications are simultaneously present. The results of the study indicate that the ResNet50 model can be used in a routine clinical environment for segmentation of mammographic masses.
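For clarity, the two reported metrics are the Jaccard index (JI) and the F1/Dice score over binary masks; a minimal sketch:

```python
# Jaccard index and F1/Dice score for binary segmentation masks
# (assumes non-empty masks).
import numpy as np

def jaccard(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

def f1_dice(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum())
```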

A Machine Learning Method to Determine Candidates for Total and Unicompartmental Knee Arthroplasty Based on a Voting Mechanism.

Zhang N, Zhang L, Xiao L, Li Z, Hao Z

PubMed · Jun 5 2025
Knee osteoarthritis (KOA) is a prevalent condition. Accurate selection between total knee arthroplasty (TKA) and unicompartmental knee arthroplasty (UKA) is crucial for optimal treatment in patients who have end-stage KOA, particularly for improving clinical outcomes and reducing healthcare costs. This study proposes a machine learning model based on a voting mechanism to enhance the accuracy of surgical decision-making for KOA patients. Radiographic data were collected from a high-volume joint arthroplasty practice, focusing on anterior-posterior, lateral, and skyline X-ray views. The dataset included 277 TKA and 293 UKA cases, each labeled through intraoperative observations (indicating whether TKA or UKA was the appropriate choice). A five-fold cross-validation approach was used for training and validation. In the proposed method, three base models were first trained independently on single-view images, and a voting mechanism was implemented to aggregate model outputs. The performance of the proposed method was evaluated using metrics such as accuracy and the area under the receiver operating characteristic curve (AUC). The proposed method achieved an accuracy of 94.2% and an AUC of 0.98, demonstrating superior performance compared to existing models. The voting mechanism enabled the base models to effectively utilize the detailed features from all three X-ray views, leading to enhanced predictive accuracy and model interpretability. This study provides a high-accuracy method for surgical decision-making between TKA and UKA for KOA patients, requiring only standard X-rays and offering potential for clinical application in automated referrals and preoperative planning.
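A minimal sketch of the voting step, assuming each view-specific base model outputs a probability that TKA is the appropriate procedure; the threshold and names are ours for illustration.

```python
# Hard majority vote over the three view-specific base models
# (AP, lateral, skyline). Probabilities are hypothetical model outputs.
def vote_tka_vs_uka(view_probs: list[float]) -> str:
    """view_probs: P(TKA) from the AP, lateral, and skyline models."""
    votes = [p >= 0.5 for p in view_probs]   # True = TKA, False = UKA
    return "TKA" if sum(votes) >= 2 else "UKA"

# vote_tka_vs_uka([0.91, 0.42, 0.77]) -> "TKA"
```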

Dual energy CT-based Radiomics for identification of myocardial focal scar and artificial beam-hardening.

Zeng L, Hu F, Qin P, Jia T, Lu L, Yang Z, Zhou X, Qiu Y, Luo L, Chen B, Jin L, Tang W, Wang Y, Zhou F, Liu T, Wang A, Zhou Z, Guo X, Zheng Z, Fan X, Xu J, Xiao L, Liu Q, Guan W, Chen F, Wang J, Li S, Chen J, Pan C

PubMed · Jun 5 2025
Computed tomography is an inadequate method for detecting myocardial focal scar (MFS) because its moderate density resolution is insufficient for distinguishing MFS from artificial beam-hardening (BH). Virtual monochromatic images (VMIs) from dual-energy coronary computed tomography angiography (DECCTA) provide a variety of diagnostic information with significant potential for detecting myocardial lesions. The aim of this study was to assess whether radiomics analysis of VMIs from DECCTA can help distinguish MFS from BH. A prospective cohort of patients suspected of an old myocardial infarction was assembled at two centers between January 2021 and June 2024. MFS and BH segmentation, radiomics feature extraction, and feature selection were performed on the VMIs, and four machine learning classifiers were constructed using the strongest selected features. Subsequently, an independent validation was conducted, and a subjective diagnosis of the validation set was provided by a radiologist. The AUC was used to assess the performance of the radiomics models. The training set included 57 patients from center 1 (mean age, 54 ± 9 years; 55 men), and the external validation set included 10 patients from center 2 (mean age, 59 ± 10 years; 9 men). The radiomics models exhibited their highest AUC value of 0.937 (at the 130 keV VMIs), while the radiologist's highest AUC value was 0.734 (at the 40 keV VMIs). Integrating radiomic features derived from VMIs of DECCTA with machine learning algorithms has the potential to improve the efficiency of distinguishing MFS from BH.
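Since the abstract does not name the four classifiers, the sketch below uses logistic regression as a stand-in to show how the per-energy comparison can be run: one radiomics feature matrix per keV level, scored by cross-validated AUC.

```python
# Compare radiomics classifiers across virtual monochromatic energies.
# features_by_kev: dict mapping keV -> (n_lesions, n_features) matrix;
# y: binary labels (MFS vs. BH). Classifier choice is illustrative.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def auc_per_kev(features_by_kev: dict, y) -> dict:
    return {
        kev: cross_val_score(LogisticRegression(max_iter=1000), X, y,
                             scoring="roc_auc", cv=5).mean()
        for kev, X in features_by_kev.items()
    }

# The study's best radiomics AUC (0.937) was obtained at the 130 keV VMIs.
```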