Page 91 of 314 (3139 results)

The MSA Atrophy Index (MSA-AI): An Imaging Marker for Diagnosis and Clinical Progression in Multiple System Atrophy.

Trujillo P, Hett K, Cooper A, Brown AE, Iregui J, Donahue MJ, Landman ME, Biaggioni I, Bradbury M, Wong C, Stamler D, Claassen DO

PubMed · Jul 14 2025
Reliable biomarkers are essential for tracking disease progression and advancing treatments for multiple system atrophy (MSA). In this study, we propose the MSA Atrophy Index (MSA-AI), a novel composite volumetric measure to distinguish MSA from related disorders and monitor disease progression. Seventeen participants with an initial diagnosis of probable MSA were enrolled in the longitudinal bioMUSE study and underwent 3T MRI, biofluid analysis, and clinical assessments at baseline, 6, and 12 months. Final diagnoses were determined after 12 months using clinical progression, imaging, and fluid biomarkers. Ten participants retained an MSA diagnosis, while five were reclassified as either Parkinson disease (PD, n = 4) or dementia with Lewy bodies (DLB, n = 1). Cross-sectional comparisons included additional MSA cases (n = 26), healthy controls (n = 23), pure autonomic failure (n = 23), PD (n = 56), and DLB (n = 8). Lentiform nucleus, cerebellum, and brainstem volumes were extracted using deep learning-based segmentation. Z-scores were computed using a normative dataset (n = 469) and integrated into the MSA-AI. Group differences were tested with linear regression; longitudinal changes and clinical correlations were assessed using mixed-effects models and Spearman correlations. MSA patients exhibited significantly lower MSA-AI scores compared to all other diagnostic groups (p < 0.001). The MSA-AI effectively distinguished MSA from related synucleinopathies, correlated with baseline clinical severity (ρ = -0.57, p < 0.001), and predicted disease progression (ρ = -0.55, p = 0.03). Longitudinal reductions in MSA-AI were associated with worsening clinical scores over 12 months (ρ = -0.61, p = 0.01). The MSA-AI is a promising imaging biomarker for diagnosis and monitoring disease progression in MSA. These findings require validation in larger, independent cohorts.
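The abstract does not give the exact MSA-AI formula, but a composite volumetric index of this kind is typically built by converting each regional volume to a z-score against a normative reference and combining the z-scores. A minimal sketch under that assumption (the region list follows the abstract; all volumes, normative values, and weights below are hypothetical):

```python
import numpy as np

def atrophy_index(volumes, norm_mean, norm_sd, weights=None):
    """Combine regional brain volumes into one composite score.

    Each region's volume is converted to a z-score against a normative
    reference; the z-scores are then (optionally weighted) averaged.
    Lower scores indicate greater atrophy.
    """
    z = (np.asarray(volumes, dtype=float)
         - np.asarray(norm_mean, dtype=float)) / np.asarray(norm_sd, dtype=float)
    if weights is None:
        weights = np.ones_like(z)
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(weights * z) / np.sum(weights))

# Hypothetical volumes (mL) for lentiform nucleus, cerebellum, brainstem
patient   = [9.1, 110.0, 24.0]
norm_mean = [10.5, 130.0, 28.0]
norm_sd   = [0.9, 10.0, 2.5]

score = atrophy_index(patient, norm_mean, norm_sd)  # negative = atrophic
```

In this sketch a more atrophic brain yields a more negative score, matching the abstract's finding that MSA patients had significantly lower MSA-AI values than the other diagnostic groups.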

A radiomics-clinical predictive model for difficult laparoscopic cholecystectomy based on preoperative CT imaging: a retrospective single center study.

Sun RT, Li CL, Jiang YM, Hao AY, Liu K, Li K, Tan B, Yang XN, Cui JF, Bai WY, Hu WY, Cao JY, Qu C

PubMed · Jul 14 2025
Accurately identifying difficult laparoscopic cholecystectomy (DLC) preoperatively remains a clinical challenge. Previous studies utilizing clinical variables or morphological imaging markers have demonstrated suboptimal predictive performance. This study aims to develop an optimal radiomics-clinical model by integrating preoperative CT-based radiomics features with clinical characteristics. A retrospective analysis was conducted on 2,055 patients who underwent laparoscopic cholecystectomy (LC) for cholecystitis at our center. Preoperative CT images were processed with super-resolution reconstruction to improve consistency, and high-throughput radiomic features were extracted from the gallbladder wall region. A combination of radiomic and clinical features was selected using the Boruta-LASSO algorithm. Predictive models were constructed using six machine learning algorithms and validated, with model performance evaluated based on the AUC, accuracy, Brier score, and DCA to identify the optimal model. Model interpretability was further enhanced using the SHAP method. The Boruta-LASSO algorithm identified 10 key radiomic and clinical features for model construction, including the Rad-Score, gallbladder wall thickness, fibrinogen, C-reactive protein, and low-density lipoprotein cholesterol. Among the six machine learning models developed, the radiomics-clinical model based on the random forest algorithm demonstrated the best predictive performance, with an AUC of 0.938 in the training cohort and 0.874 in the validation cohort. The Brier score, calibration curve, and DCA confirmed the superior predictive capability of this model, significantly outperforming previously published models. The SHAP analysis further visualized feature importance, enhancing model interpretability. This study developed the first radiomics-clinical random forest model for the preoperative prediction of DLC. The model supports safer, more individualized surgical planning and treatment strategies.
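The select-then-classify pipeline described above can be sketched as follows. This is an illustrative run on synthetic data, not the authors' code: scikit-learn's L1-penalized logistic regression stands in for the LASSO half of Boruta-LASSO, and all sample counts and hyperparameters are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for high-throughput radiomic + clinical features
X, y = make_classification(n_samples=600, n_features=120, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# L1-penalized selection approximates the LASSO step of Boruta-LASSO:
# features with zero coefficients are discarded
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

# Random forest on the selected features, evaluated by AUC as in the study
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr_sel, y_tr)
auc = roc_auc_score(y_te, rf.predict_proba(X_te_sel)[:, 1])
```

The two-stage design matters here: pruning redundant radiomic features before fitting the forest reduces overfitting on cohorts of this size.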

Classification of Renal Lesions by Leveraging Hybrid Features from CT Images Using Machine Learning Techniques.

Kaur R, Khattar S, Singla S

PubMed · Jul 14 2025
Renal cancer is among the leading contributors to rising mortality rates globally, a burden that early detection and diagnosis can reduce. Lesion classification rests largely on lesion characteristics, including varied shape and texture properties. Computed tomography (CT) is a routinely used imaging modality for studying the renal soft tissues. However, a radiologist's capacity to assess a large corpus of CT images is limited, and misdiagnosis of kidney lesions can result in cancer progression or unnecessary chemotherapy. To address these challenges, this study presents a machine learning technique based on a novel feature vector for the automated classification of renal lesions using multi-model texture-based feature extraction. The proposed feature vector could serve as an integral component of a computer-aided diagnosis (CAD) system for characterizing renal lesion texture and can assist physicians in providing more precise lesion interpretation. In this work, the authors employed different texture models to analyze CT scans and classify benign and malignant kidney lesions. Texture analysis is performed using features such as first-order statistics (FoS), spatial gray level co-occurrence matrix (SGLCM), Fourier power spectrum (FPS), statistical feature matrix (SFM), Law's texture energy measures (TEM), gray level difference statistics (GLDS), fractal, and neighborhood gray tone difference matrix (NGTDM). Multiple texture models were used to quantify renal texture patterns from a selected region of interest (ROI) within each lesion. In addition, dimensionality reduction is employed to identify the most discriminative features for separating benign from malignant lesions, and a unique feature vector based on correlation-based feature selection, information gain, and gain ratio is proposed.
Several machine learning classifiers were evaluated on the proposed feature set, with the random forest (RF) model outperforming all other techniques in distinguishing benign from malignant tumors across the performance evaluation metrics. The proposed system is validated on a dataset of 50 subjects, achieving a classification accuracy of 95.8% and outperforming other conventional models.
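As one concrete example of the texture models listed above, the first-order statistics (FoS) of an ROI can be computed directly from its gray-level distribution. A minimal NumPy sketch; the synthetic ROI, its intensity parameters, and the histogram bin count are all hypothetical stand-ins:

```python
import numpy as np

def first_order_stats(roi, bins=32):
    """First-order statistics (FoS) of a gray-level ROI: mean, spread,
    distribution shape, and histogram entropy."""
    roi = np.asarray(roi, dtype=float).ravel()
    mean, std = roi.mean(), roi.std()
    skew = float(np.mean(((roi - mean) / std) ** 3)) if std > 0 else 0.0
    kurt = float(np.mean(((roi - mean) / std) ** 4) - 3) if std > 0 else 0.0
    hist, _ = np.histogram(roi, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = float(-np.sum(p * np.log2(p)))  # Shannon entropy of the histogram
    return {"mean": float(mean), "std": float(std), "skewness": skew,
            "kurtosis": kurt, "entropy": entropy}

rng = np.random.default_rng(0)
roi = rng.normal(100, 15, size=(32, 32))  # synthetic stand-in for a CT ROI (HU)
feats = first_order_stats(roi)
```

The other listed models (SGLCM, NGTDM, etc.) follow the same pattern of mapping an ROI to a fixed-length numeric vector, which is why their outputs can be concatenated into a single hybrid feature vector before selection.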

A hybrid learning approach for MRI-based detection of Alzheimer's disease stages using dual CNNs and an ensemble classifier.

Zolfaghari S, Joudaki A, Sarbaz Y

PubMed · Jul 14 2025
Alzheimer's Disease (AD) and related dementias are significant global health issues characterized by progressive cognitive decline and memory loss. Computer-aided systems can help physicians detect AD early and accurately, enabling timely intervention and effective management. This study presents a combination of two parallel Convolutional Neural Networks (CNNs) and an ensemble learning method for classifying AD stages using Magnetic Resonance Imaging (MRI) data. The images were first resized and augmented before being input into Network 1 and Network 2, which have different structures and layers for extracting important features. These features were then fused and fed into an ensemble classifier comprising Support Vector Machine, Random Forest, and K-Nearest Neighbors, with hyperparameters optimized by grid-search cross-validation. Using the features of Network 1 or Network 2 alone, the four classes were identified with accuracies of 95.16% and 97.97%, respectively, while fusing the features from both networks raised the classification accuracy to 99.06%. These findings demonstrate the potential of the proposed hybrid approach for classifying AD stages. As the evaluation was conducted at the slice level using a Kaggle dataset, additional subject-level validation and clinical testing are required to determine real-world applicability.
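The fuse-then-ensemble design can be sketched as below, with random linear projections standing in for the two CNNs' learned feature extractors. Everything here (sample sizes, feature dimensions, parameter grids) is illustrative, not the study's configuration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-ins for the two CNNs: two different projections of the same inputs
X_raw, y = make_classification(n_samples=400, n_features=64, n_informative=12,
                               random_state=1)
rng = np.random.default_rng(1)
feats_net1 = X_raw @ rng.standard_normal((64, 16))  # "Network 1" features
feats_net2 = X_raw @ rng.standard_normal((64, 16))  # "Network 2" features
X = np.hstack([feats_net1, feats_net2])             # feature fusion

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=1)

# Soft-voting ensemble of SVM, RF, and KNN, as described in the abstract
ensemble = VotingClassifier([
    ("svm", SVC(probability=True)),
    ("rf", RandomForestClassifier(random_state=1)),
    ("knn", KNeighborsClassifier()),
], voting="soft")

# Grid-search cross-validation over a few hyperparameters
grid = GridSearchCV(ensemble, {"svm__C": [0.1, 1.0],
                               "knn__n_neighbors": [3, 5]}, cv=3)
grid.fit(X_tr, y_tr)
acc = grid.score(X_te, y_te)
```

Concatenating the two feature sets before classification is what lets the ensemble exploit complementary representations, mirroring the accuracy gain the abstract reports for the fused features.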

A generative model uses healthy and diseased image pairs for pixel-level chest X-ray pathology localization.

Dong K, Cheng Y, He K, Suo J

PubMed · Jul 14 2025
Medical artificial intelligence (AI) offers potential for automatic pathological interpretation, but a practicable AI model demands both pixel-level accuracy and high explainability for diagnosis. The construction of such models relies on substantial training data with fine-grained labelling, which is impractical in real applications. To circumvent this barrier, we propose a prompt-driven constrained generative model to produce anatomically aligned healthy and diseased image pairs and learn a pathology localization model in a supervised manner. This paradigm provides high-fidelity labelled data and addresses the lack of chest X-ray images with labelling at fine scales. Benefitting from the emerging text-driven generative model and the incorporated constraint, our model presents promising localization accuracy of subtle pathologies, high explainability for clinical decisions, and good transferability to many unseen pathological categories such as new prompts and mixed pathologies. These advantageous features establish our model as a promising solution to assist chest X-ray analysis. In addition, the proposed approach is also inspiring for other tasks lacking massive training data and time-consuming manual labelling.

Predicting the molecular subtypes of 2021 WHO grade 4 glioma by a multiparametric MRI-based machine learning model.

Xu W, Li Y, Zhang J, Zhang Z, Shen P, Wang X, Yang G, Du J, Zhang H, Tan Y

PubMed · Jul 14 2025
Accurately distinguishing the molecular subtypes of 2021 World Health Organization (WHO) grade 4 central nervous system (CNS) gliomas is highly relevant for prognostic stratification and personalized treatment. This study aimed to develop and validate a machine learning (ML) model using multiparametric MRI for the preoperative differentiation of astrocytoma, CNS WHO grade 4, from glioblastoma (GBM), isocitrate dehydrogenase-wild-type (IDH-wt) (WHO 2021) (Task 1: grade 4 vs. GBM); to stratify astrocytoma, CNS WHO grade 4, by distinguishing astrocytoma, IDH-mutant (IDH-mut), CNS WHO grade 4 from astrocytoma, IDH-wt, CNS WHO grade 4 (Task 2: IDH-mut grade 4 vs. IDH-wt grade 4); and to evaluate the model's prognostic value. We retrospectively analyzed 320 glioma patients from three hospitals (training/testing split, 7:3) and 99 patients from The Cancer Genome Atlas (TCGA) database for external validation. Radiomic features were extracted from tumor and edema regions on contrast-enhanced T1-weighted imaging (CE-T1WI) and T2 fluid-attenuated inversion recovery (T2-FLAIR). Extreme gradient boosting (XGBoost) was used to construct the ML, clinical, and combined models. Model performance was evaluated with receiver operating characteristic (ROC) curves, decision curves, and calibration curves; stability was assessed using six additional classifiers. Kaplan-Meier (KM) survival analysis and the log-rank test assessed the model's prognostic value. In both tasks, the combined model (AUC = 0.907, 0.852, and 0.830 for Task 1; AUC = 0.899, 0.895, and 0.792 for Task 2) and the optimal ML model (AUC = 0.902, 0.854, and 0.832 for Task 1; AUC = 0.904, 0.899, and 0.783 for Task 2) significantly outperformed the clinical model (AUC = 0.671, 0.656, and 0.543 for Task 1; AUC = 0.619, 0.605, and 0.400 for Task 2) in the training, testing, and validation sets. Survival analysis showed that the combined model performed similarly to molecular subtyping in both tasks (p = 0.964 and p = 0.746). The multiparametric MRI ML model effectively distinguished astrocytoma, CNS WHO grade 4 from GBM, IDH-wt (WHO 2021) and differentiated astrocytoma, IDH-mut from astrocytoma, IDH-wt, CNS WHO grade 4. The model also provided reliable survival stratification for glioma patients across molecular subtypes.
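The gradient-boosting modelling step can be sketched on synthetic features as below. Note that scikit-learn's GradientBoostingClassifier is used here as a stand-in for XGBoost (the study's actual choice), and the data shapes and hyperparameters are hypothetical:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for CE-T1WI / T2-FLAIR radiomic plus clinical features
X, y = make_classification(n_samples=320, n_features=40, n_informative=8,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=42)

# Boosted trees fit additive stage-wise corrections to the running prediction;
# XGBoost implements the same principle with additional regularization
clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                 max_depth=3, random_state=42).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

Reporting AUC on a held-out split mirrors the abstract's training/testing/validation evaluation, though the real study additionally used external TCGA data and calibration and decision-curve analyses.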

ESE and Transfer Learning for Breast Tumor Classification.

He Y, Batumalay M, Thinakaran R

PubMed · Jul 14 2025
In this study, we proposed a lightweight neural network architecture based on an inverted residual network, an efficient squeeze-excitation (ESE) module, and double transfer learning, called TLese-ResNet, for breast cancer molecular subtype recognition. The inverted ResNet reduces the number of network parameters while enhancing cross-layer gradient propagation and feature expression. The ESE module reduces network complexity while preserving inter-channel relationships. The dataset comes from mammography images of patients diagnosed with invasive breast cancer at a hospital in Jiangxi and comprises preoperative images in CC and MLO views. Given the relatively small dataset, double transfer learning is used in addition to common data augmentation methods. It comprises a first transfer, in which the source domain is ImageNet and the target domain is a COVID-19 chest X-ray dataset, and a second transfer, in which the source domain is the first transfer's target domain and the target domain is the collected mammography dataset. Using five-fold cross-validation, the mean accuracy and area under the receiver operating characteristic curve (AUC) on mammographic images of CC and MLO views were 0.818 and 0.883, respectively, outperforming other state-of-the-art deep learning models such as ResNet-50 and DenseNet-121. The proposed model can therefore provide clinicians with an effective, non-invasive auxiliary tool for molecular subtype identification in breast cancer.
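An efficient squeeze-excitation block, as popularized in architectures such as VoVNetV2, replaces SE's two channel-reducing fully connected layers with a single full-width one. A NumPy sketch of the forward pass under that reading of "ESE"; the shapes and identity weights below are illustrative only:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ese_block(feature_map, w, b):
    """Efficient squeeze-excitation forward pass.

    feature_map: (C, H, W) activations; w: (C, C) weights of the single
    fully connected layer; b: (C,) bias.  Unlike the original SE block,
    there is no bottleneck reduction, so channel information is preserved.
    """
    squeezed = feature_map.mean(axis=(1, 2))   # squeeze: global average pool
    gates = sigmoid(w @ squeezed + b)          # excite: one FC + sigmoid
    return feature_map * gates[:, None, None]  # rescale each channel

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 4, 4))          # toy (C, H, W) feature map
w, b = np.eye(8), np.zeros(8)                  # identity weights for the demo
out = ese_block(fmap, w, b)
```

Because each gate lies in (0, 1), the block can only attenuate channels, acting as a learned channel-attention mechanism at the cost of a single C-by-C matrix.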

X-ray2CTPA: leveraging diffusion models to enhance pulmonary embolism classification.

Cahan N, Klang E, Aviram G, Barash Y, Konen E, Giryes R, Greenspan H

PubMed · Jul 14 2025
Chest X-rays or chest radiography (CXR), commonly used for medical diagnostics, typically enables limited imaging compared to computed tomography (CT) scans, which offer more detailed and accurate three-dimensional data, particularly contrast-enhanced scans like CT Pulmonary Angiography (CTPA). However, CT scans entail higher costs, greater radiation exposure, and are less accessible than CXRs. In this work, we explore cross-modal translation from a 2D low contrast-resolution X-ray input to a 3D high contrast and spatial-resolution CTPA scan. Driven by recent advances in generative AI, we introduce a novel diffusion-based approach to this task. We employ the synthesized 3D images in a classification framework and show improved AUC in a Pulmonary Embolism (PE) categorization task, using the initial CXR input. Furthermore, we evaluate the model's performance using quantitative metrics, ensuring diagnostic relevance of the generated images. The proposed method is generalizable and capable of performing additional cross-modality translations in medical imaging. It may pave the way for more accessible and cost-effective advanced diagnostic tools. The code for this project is available: https://github.com/NoaCahan/X-ray2CTPA .
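Diffusion models of the kind used here learn to invert a fixed forward noising process. A NumPy sketch of that closed-form forward step, q(x_t | x_0) = sqrt(a_bar_t) x_0 + sqrt(1 - a_bar_t) eps, on a toy 3D volume; the schedule and volume shape are illustrative, not the paper's configuration:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # standard linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)    # cumulative signal retention

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8, 8))     # toy stand-in for a 3D CTPA volume
eps = rng.standard_normal(x0.shape)     # Gaussian noise

def q_sample(x0, t, eps):
    """Sample x_t from q(x_t | x_0) in closed form: interpolate between
    the clean volume and pure noise according to the schedule."""
    a = alphas_bar[t]
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps

x_early = q_sample(x0, 10, eps)         # barely noised, still close to x0
x_late = q_sample(x0, T - 1, eps)       # almost pure noise
```

Training teaches a network to predict eps from x_t (here conditioned on the 2D X-ray), and generation runs the chain in reverse from noise to a synthetic CTPA volume.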

Generative AI enables medical image segmentation in ultra low-data regimes.

Zhang L, Jindal B, Alaa A, Weinreb R, Wilson D, Segal E, Zou J, Xie P

PubMed · Jul 14 2025
Semantic segmentation of medical images is pivotal in applications like disease diagnosis and treatment planning. While deep learning automates this task effectively, it struggles in ultra low-data regimes due to the scarcity of annotated segmentation masks. To address this, we propose a generative deep learning framework that produces high-quality image-mask pairs as auxiliary training data. Unlike traditional generative models that separate data generation from model training, ours uses multi-level optimization for end-to-end data generation. This allows segmentation performance to guide the generation process, producing data tailored to improve segmentation outcomes. Our method demonstrates strong generalization across 11 medical image segmentation tasks and 19 datasets, covering various diseases, organs, and modalities. It improves performance by 10-20% (absolute) in both same- and out-of-domain settings and requires 8-20 times less training data than existing approaches. This greatly enhances the feasibility and cost-effectiveness of deep learning in data-limited medical imaging scenarios.

Deep Learning Applications in Lymphoma Imaging.

Sorin V, Cohen I, Lekach R, Partovi S, Raskin D

PubMed · Jul 14 2025
Lymphomas are a diverse group of disorders characterized by the clonal proliferation of lymphocytes. While definitive diagnosis of lymphoma relies on histopathology, immune-phenotyping and additional molecular analyses, imaging modalities such as PET/CT, CT, and MRI play a central role in the diagnostic process and management, from assessing disease extent, to evaluation of response to therapy and detecting recurrence. Artificial intelligence (AI), particularly deep learning models like convolutional neural networks (CNNs), is transforming lymphoma imaging by enabling automated detection, segmentation, and classification. This review elaborates on recent advancements in deep learning for lymphoma imaging and its integration into clinical practice. Challenges include obtaining high-quality, annotated datasets, addressing biases in training data, and ensuring consistent model performance. Ongoing efforts are focused on enhancing model interpretability, incorporating diverse patient populations to improve generalizability, and ensuring safe and effective integration of AI into clinical workflows, with the goal of improving patient outcomes.
